Stanford CS230 | Autumn 2025 | Lecture 9: Career Advice in AI
Stanford Online
May 10, 2026
Transcript
0:04
What I want to do
today is chat with you
0:07
about career advice in AI.
0:09
And in previous years, I used
to do most of this lecture
0:13
by myself.
0:14
But what I thought
I'd do today is
0:16
I'll share just a few
thoughts and then hand it over
0:19
to my good friend Laurence
Moroney, who I invited to speak
0:23
here and kindly agreed to come
all the way to San Francisco,
0:27
he lives in Seattle, to share
with us a very broad market
0:31
landscape for what he's
seeing in the job market,
0:33
as well as tips for
growing a career in AI.
0:39
But there's just two slides
and then one more thought
0:42
I want to share with you
before I hand it over
0:44
to Laurence, which is it
really feels like the best
0:49
opportunity, the best time
ever to be building with AI
0:53
and to build a career in AI.
0:55
A few months ago I noticed in
social media and traditional media,
0:59
there were a few questions
about whether AI is slowing down.
1:03
People saying, well,
is GPT-5 that good?
1:05
I think it's
actually pretty good.
1:07
But there are questions about
is AI progress slowing down?
1:10
And I think part of the reason
the question was even raised was
1:13
because if a benchmark for AI
is 100% is perfect answers,
1:19
then if you make rapid
progress, at some point,
1:22
you cannot get
above 100% accuracy.
1:25
But one of the studies that most
influenced my thinking was work
1:30
done by this
organization, M-E-T-R,
1:32
METR that studied
as time passes,
1:37
how complex are the tasks that
AI could do as measured by how
1:40
long it takes a human
to do that task?
1:43
So a few years ago, maybe
GPT-2 could do tasks
1:47
that a human could do
in a couple seconds.
1:50
And then they
could do tasks that
1:52
took a human four seconds,
then eight seconds, then,
1:57
a minute, two minutes,
four minutes, and so on.
2:00
And the study estimates that
the length of task AI can do
2:03
is doubling every seven months.
2:06
And I think on
this metric, I feel
2:09
optimistic that AI will
continue making progress,
2:12
meaning the complexity
of tasks as measured
2:15
by how long a human takes to do
something is doubling rapidly.
2:19
And the same study, with
a smaller data set,
2:22
seems to show-- the same study
argued that for AI coding,
2:25
the doubling time is even
shorter, maybe 70 days.
2:29
So this code that
used to take me,
2:31
I don't know, 10 minutes
to write, then 20 minutes
2:34
to write, 40 minutes
to write, and AI
2:36
could do more and more of that.
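
As a back-of-the-envelope sketch of that doubling claim (the starting task length below is an assumed number for illustration, not METR's actual data), the trend is just exponential extrapolation:

    # Extrapolate the task-horizon doubling described above: a 7-month
    # doubling time for general tasks, maybe ~70 days for coding.
    def task_horizon(initial_seconds, months, doubling_months):
        """Task length (in seconds) an AI could handle after `months`."""
        return initial_seconds * 2 ** (months / doubling_months)

    # Example: start from a 4-second task and project 5 years out
    # at a 7-month doubling time.
    for year in range(6):
        secs = task_horizon(4, 12 * year, 7)
        print(f"year {year}: ~{secs / 60:.1f} minutes")
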
2:38
And so the reasons I
think this is a golden age
2:41
to be building, best
time we've ever seen
2:43
is maybe two themes:
more powerful and faster.
2:47
So we can all, all
of you in this room
2:50
can now write software that
is more powerful than what
2:54
anyone on the planet
could have built a year
2:57
ago by using AI building blocks.
3:00
AI building blocks include
large language models,
3:03
agentic
workflows, voice AI,
3:05
and of course, deep learning.
3:06
It turns out that a lot of LLMs
have a decent, at least basic
3:10
understanding of deep learning.
3:12
So if you want
one of the frontier models
3:14
to implement a cutting edge
neural network for you,
3:16
try prompting it to implement
a transformer network for you.
3:19
It's actually not bad at helping
you use these building blocks
3:23
to build software quickly.
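
As a concrete illustration of that prompt, here is a minimal sketch of the kind of thing a frontier model will hand back when asked to implement a transformer block: a single self-attention plus feed-forward block in PyTorch. The sizes are arbitrary assumptions, and this is not any particular model's official implementation.

    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        def __init__(self, d_model=64, n_heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            # Residual self-attention, then a residual feed-forward layer.
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h)
            x = x + attn_out
            return x + self.mlp(self.norm2(x))

    x = torch.randn(2, 10, 64)           # (batch, sequence, embedding)
    print(TransformerBlock()(x).shape)   # torch.Size([2, 10, 64])
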
3:25
And so we have very
powerful building blocks
3:29
that were very difficult to use or did
not exist a year or two ago.
3:32
And so you can now build
software that does things
3:34
that no one else on the planet,
even the most advanced teams
3:38
on the planet, could have done.
3:39
And then also with
AI coding, the speed
3:44
with which you can
get software written
3:46
is much faster than ever before.
3:49
And I've personally
found it important
3:50
to stay on the
frontier of tools,
3:52
because the tools for
AI coding change,
3:55
I don't know, really rapidly.
3:57
So I feel like since several
months ago, my personal number
4:03
one favorite tool became
Claude Code, moving on
4:07
from some earlier
generations, I think.
4:09
And then I think since
the release of GPT-5,
4:13
I think OpenAI
Codex has actually
4:15
made tremendous progress.
4:17
And this morning,
Gemini 3 was released,
4:19
which I haven't had time to play
with yet, since it was just this morning.
4:22
It seems like another
huge leap forward.
4:25
So I feel if you ask
me every three months
4:27
what my personal favorite
coding tool is, it actually
4:29
probably changes definitely
every six months, but quite
4:32
possibly every three months.
4:33
And I find that being half a
generation behind in these tools
4:38
means being, frankly, quite
a bit less productive.
4:41
And I know everyone says
AI is moving so fast,
4:44
everything's changing so fast.
4:45
But AI coding tools, of
all the sectors in AI,
4:48
many things maybe don't move as
fast as the hype says they do,
4:51
but AI coding
tools is one sector
4:53
where I see the pace of
progress is tremendous.
4:56
And staying at the latest
generation of tools,
4:59
rather than half a
generation behind, makes
5:01
you more productive.
5:03
And with our ability to
build more powerful software
5:06
and build it much
faster than ever
5:08
before, I think
one piece of advice
5:11
that I give now,
much more strongly
5:12
now than even a year
ago or two years ago,
5:15
is just go and build stuff.
5:17
Take classes from Stanford.
5:19
Take online courses.
5:20
And additionally,
your opportunity
5:22
to build things,
and I think Laurence
5:24
is going to talk about
showing them to others,
5:26
is greater than ever before.
5:28
But there's one
weird implication
5:30
of this that is still,
5:33
I don't know, something more and more
people are appreciating,
5:36
but not widely known, which
is the product management
5:38
bottleneck, which is that when
it is increasingly easy to go
5:43
from a clearly written software
spec to a piece of code,
5:47
then the bottleneck increasingly
is deciding what to build
5:50
or increasingly writing that
clear spec for what you actually
5:53
want to build.
5:55
When I'm building
software, I often
5:57
think of going through a loop
where we'll write some software,
6:00
write some code, show it to users
to get user feedback.
6:03
I think of this as PM or
product management work.
6:06
And then based on
user feedback, I'll
6:09
revise my view on what users
like, what they don't like.
6:11
This UI is too difficult.
They want this feature.
6:13
They don't want that feature
and change my conception
6:16
of what to build, and
then go around this loop
6:19
many times to hopefully
iterate toward a product
6:21
that users love.
6:23
And because of AI coding, the
process of building software
6:27
has become much cheaper and
much faster than before.
6:31
But that ironically
shifts the bottleneck
6:34
to deciding what to build.
6:38
So some weird trends I'm seeing.
6:43
In Silicon Valley and
in many tech companies,
6:45
people have often talked about
an engineer to product manager,
6:48
engineer to PM ratio.
6:50
And you take these ratios
with a grain of salt,
6:53
because they kind of
vary all over the place.
6:55
But you hear companies
talk about the Eng to PM
6:57
ratio of 4 to 1 or
7 to 1 or 8 to 1.
7:00
This idea that one product
manager writing product specs
7:04
can keep four to eight engineers,
or some number like that,
7:08
busy.
7:09
But because engineering
is speeding up,
7:11
whereas product management is
not sped up by AI as much
7:15
as engineering,
I'm seeing the Eng
7:18
to PM ratio trending downward,
maybe even two to one or one to one.
7:22
So some teams I work
with, their proposed
7:24
headcount was one PM
to one engineer, which
7:26
is a ratio unlike almost all
Silicon Valley, certainly
7:31
traditional Silicon
Valley companies.
7:33
And the other thing I'm
seeing is that engineers
7:37
can also
shape products and
7:40
move really fast when
you go one step further,
7:44
take the engineer, take
the PM, and collapse them
7:46
into a single human.
7:48
And I find that
there are definitely
7:51
engineers doing
engineering work who
7:53
don't enjoy talking to
users and having that more
7:55
human, empathetic side of work.
7:58
But I'm finding increasingly
that the subset of engineers
8:01
that learn to talk to
users, get feedback, develop
8:05
deep empathy for users so
that they can make decisions
8:08
about what to build,
those engineers
8:10
are also the fastest
moving people that I'm
8:12
seeing in Silicon Valley today.
8:15
And I feel like at the
earliest stage of my career,
8:19
one thing I regretted for years
was in one of the roles I had,
8:24
I went to try to convince
a bunch of engineers
8:27
to do more product work.
8:29
And I actually made a bunch
of really good engineers
8:32
feel bad for not being
good product managers.
8:34
And that was a mistake I made,
regretted that for years.
8:37
I just shouldn't have done that.
8:39
And part of me
feels like I'm now
8:40
going back to repeat
that exact same mistake.
8:44
Having said that, I
find that the fact
8:47
that I can write
code, but also talk
8:50
to users to shape what
to do, that lets me
8:53
and the engineers that can
do this go much faster.
8:55
So I think maybe worth
taking another look
8:58
at whether engineers can
do a bit more of this work,
9:02
because then if you're not
waiting for someone else
9:05
to take the product
to customers,
9:06
you can just write code, have
a gut for what to do next,
9:08
and iterate; that pace,
that velocity of execution
9:11
is much faster.
9:13
And then before I hand over to
Laurence, just one last thing
9:17
I want to share, which is in
terms of navigating your career,
9:23
I think one of the
strongest predictors
9:26
for your speed of learning
and for your level of success
9:30
is the people you
surround yourself with.
9:32
I think we're all
social creatures.
9:33
We all learn from
people around us.
9:35
And it turns out there are
studies in sociology that
9:40
show that if your five
closest friends are smokers,
9:44
the odds of you being a
smoker are pretty high.
9:46
Please don't smoke.
9:48
It's just an example.
9:51
I don't know of
any study showing
9:52
that if your five or 10
closest friends are really
9:55
hard working, determined
people, learning quickly, trying
10:00
to make the world a
better place with AI,
10:02
that you are more
likely to do that too.
10:04
But it's one of
those things that I
10:05
think is almost certainly true.
10:07
It's like all of us are inspired
by the people around us,
10:10
and if we're able to find a good
group of people to work with,
10:12
that helps drive us forward.
10:15
In fact, here at Stanford,
I feel very fortunate--
10:17
the fantastic student body,
fantastic group of faculty.
10:22
And then the other
thing that I think
10:24
we're fortunate to
have at Stanford
10:26
is our connective tissue.
10:27
So candidly, a lot
of the people working
10:32
at a lot of the cutting edge
AI labs, the frontier labs,
10:35
they were former students of
a lot of different Stanford
10:39
faculty.
10:40
And so that rich
connective tissue candidly
10:43
means that at Stanford,
we often find out
10:45
about a lot of stuff
that's not widely
10:47
known because of the
relationships, the friendships.
10:50
And when some company
does something,
10:52
one of my friends
on the faculty
10:54
will call someone up
and say, hey, that's weird.
10:56
Does this really work?
10:57
And so that rich connective
tissue means that we're all--
11:02
just as we try to pull
our friends forward,
11:04
our friends also pull us
forward with the knowledge
11:06
and the connective tissue and
this know-how of bleeding edge
11:10
AI, which unfortunately
is not all
11:12
published on the internet
at this moment in time.
11:14
So I think while you're at
Stanford, make those friends,
11:18
form that rich
connective tissue.
11:20
And there have been a lot of
times that just for myself,
11:22
where, frankly, I
was thinking of going
11:25
in some technical direction.
11:26
I'd have one or two
phone calls with someone
11:30
really close to research, either
a Stanford researcher or someone
11:33
in a frontier lab.
11:34
They would share something with
me that I didn't know before.
11:37
And that changes
the way I choose
11:39
the technical
architecture of a project.
11:41
So I find the group of
friends you surround yourself
11:43
with gives you those little pieces
of information-- try this.
11:46
Don't do that--
that's just hype.
11:48
Ignore the PR.
11:49
Don't actually try that thing.
11:50
Those things make
a big difference
11:53
in your ability to steer the
direction of your projects.
11:56
So while you're at Stanford,
take advantage of that.
11:59
This connective tissue
that Stanford has,
12:01
it's actually really unique.
12:02
There are lots of great
universities in the world,
12:04
but at this moment in time,
I don't think there's any--
12:08
I don't want to sound like
I'm doing PR for Stanford now,
12:10
but I really think there's no
university in the world that
12:13
is as privileged as Stanford
at this moment in time,
12:16
in terms of the richness of
the connective tissue to all
12:19
of the leading AI groups.
12:23
But to me, it's also
that we're lucky here
12:25
to have a wonderful
community of people
12:27
to work with and learn from.
12:30
And for you too.
12:31
If you apply for jobs, the
thing that is much more
12:35
important for your
career success,
12:37
if you go to a company,
will be the people
12:40
you work with day to day.
12:42
So here's one story that I've
told in previous classes that I'll
12:49
repeat, which is there was a
Stanford student that I knew,
12:52
this was many years
ago,
12:54
and they did really
good work at Stanford.
12:56
I thought they were a high flyer.
12:58
And they applied for
a job at a company,
13:00
and they got a job offer
from one of the companies
13:03
with a hot AI brand.
13:07
This company refused to tell
him which team he would join.
13:10
They said, oh, come
sign up for a job.
13:13
There's a rotation system,
matching system, blah blah blah.
13:16
Sign on the dotted line first.
13:17
Then we'll figure out what's
a good project for you.
13:21
Partly because it
was a good company.
13:24
His parents were proud
of him for getting a job
13:26
at this company.
13:27
This student joined
this company hoping
13:29
to work on exciting AI project.
13:32
And after he signed
on the dotted line,
13:33
he was assigned to
work on the back end
13:36
Java payment processing
system of the company.
13:39
Nothing against anyone
that wants to do Java
13:41
back end payment
processing systems.
13:42
I think they're great, but this
is an AI student that did not
13:45
get matched to an AI project.
13:47
And so for about a year,
he was really frustrated,
13:50
and he actually left this
company after about a year.
13:53
The unfortunate thing
is, I told this story
13:56
in CS230 some years back.
14:00
And then after I
was already telling
14:04
the story in this class,
a couple of years later,
14:08
another student in CS230 went
through the same experience
14:13
with the same company, not Java
back end payment processing,
14:16
but different project.
14:17
And I think this effect
of trying to figure out
14:21
who you'll be actually
working with day to day
14:23
and making sure you're
surrounded by people that
14:25
inspire you and work
on exciting projects,
14:27
I think that's important.
14:28
And to be completely
candid, if a company
14:30
refuses to tell you what
team you'll be assigned to,
14:34
that does raise a
question in my mind
14:37
about what will actually happen.
14:39
And I think that
instead of working
14:42
for the company with
the hottest brand,
14:44
sometimes if you find a
really good team with really
14:48
hard working, knowledgeable,
smart people trying to do good
14:50
with AI, but the company
logo just isn't as hot,
14:54
I think that often
means you actually
14:56
learn faster and progress
your career better because,
14:59
after all, you don't
learn from the excitement
15:03
of the company logo when
you walk through the door,
15:05
you learn from the people
you deal with day to day.
15:08
So I just urge you to use
that as a huge criterion
15:13
for your selection process
for what you decide to do.
15:21
But I think number
one in my advice
15:25
is that it's become much
easier than ever
15:27
before to build powerful
software faster.
15:30
And what that means
is do be responsible.
15:33
Don't build software
that hurts others.
15:35
And at the same time, there are
so many things that each of you
15:39
can build.
15:40
And what I find is the number
of ideas out in the world
15:42
is much greater than the
number of people with the skill
15:45
to build them.
15:45
So I know that finding jobs has
gotten tougher for fresh college
15:49
grads.
15:49
At the same time,
a lot of teams just
15:51
can't find enough
skilled people.
15:53
And so there are
a lot of projects
15:56
in the world that if
you don't build it,
15:58
I think no one else
will build it either.
16:00
So you don't need to-- so
long as you don't harm others,
16:04
be responsible, there are a
lot of things where you don't
16:06
need to wait for permission.
16:07
You don't need to wait for
someone else to do it first
16:10
and then you do it.
16:11
The cost of a failure is
much lower than before
16:14
because you waste a weekend
but learn something.
16:17
That seems fine to me.
16:18
So let's
be responsible;
16:21
going out and trying things
and building lots of things
16:24
would be the number one
most important thing I
16:27
think would help your careers.
16:31
And yeah, I think I'm going
to say one last thing that
16:35
is considered not politically
correct in some circles,
16:39
but I'll just say it anyway,
which is in some circles,
16:43
it has become considered
not politically correct
16:47
to encourage others
to work hard.
16:50
I'm going to encourage
you to work hard.
16:53
Now, I think the reason
some people don't
16:55
like that is because
there are some people that
16:57
are in a phase of
life where they're not
16:59
in a position to work hard.
17:00
So right after my
children were born,
17:03
I was not working hard for
a short period of time.
17:05
And there are people who, because
of an injury or disability,
17:10
or other very valid
reasons, are
17:12
not in a position to work
hard at that moment in time.
17:14
And we should respect
them, support them,
17:16
make sure they're well
taken care of even
17:18
though they're not working hard.
17:19
Having said that,
of all my, say,
17:22
PhD students that have
become very successful,
17:24
I saw every single one of
them work incredibly hard.
17:27
I mean, the 2:00 AM sitting
up, hyperparameter tuning,
17:30
been there, done that.
17:31
Still doing it some days.
17:33
And if you are fortunate enough
to be in a position in life
17:36
where you can work
really hard, there
17:40
are so many opportunities
to do things right now.
17:44
If you get excited, as I do,
spending evenings and weekends
17:47
coding and building stuff
and getting user feedback,
17:50
if you lean in and
do those things,
17:51
it will increase your odds
of being really successful.
17:54
So I don't know.
17:55
Maybe I'll get into some
trouble with some people
17:57
for encouraging you to
work hard, but I
17:58
find that the truth is
people that work hard
18:02
get a lot more done.
18:02
We should also respect people
that don't and people that
18:05
aren't in a position to do so.
18:06
But between watching
some dumb TV
18:10
show versus firing up your
agentic coder on a weekend
18:14
to try something,
I'm going to choose
18:16
the latter almost every time.
18:18
Unless I'm watching a show with
my kids, sometimes I do that.
18:20
But I mean--
18:23
I hope you do that.
18:26
All right, so those are the
main things I wanted to say.
18:29
What I want to do is hand the
stage over to my good friend
18:33
Laurence Moroney, who will share
a lot more about career
18:38
advice in AI.
18:39
Let me just do a quick intro.
18:40
I've known Laurence
for a long time.
18:42
He's done a lot of online
education work, sometimes
18:44
with me and my teams,
taught a lot of people
18:46
Tensorflow, taught a
lot of people PyTorch.
18:49
He was lead AI advocate at
Google for many years, now
18:52
runs a group at Arm.
18:53
I've also enjoyed quite
a few of his books.
18:55
This is one of them.
18:56
He recently also published
a new book on PyTorch.
18:59
This is an excellent book,
Introduction to PyTorch.
19:02
And he's a very sought after
speaker in many circles,
19:06
so I was very grateful when
he agreed to come speak to us.
19:10
Pleasure is all mine.
19:11
I just want to
reinforce something
19:12
that Andrew was
talking about earlier
19:14
on about choosing
the people that you
19:15
work with being very important.
19:17
But I also want to show it
from the other way around:
19:20
the company, when
they're interviewing you,
19:22
is also choosing you.
19:23
And the good
companies really want
19:25
to choose the people
that they work with also.
19:27
And I've been doing a lot
of mentoring of young people
19:30
particularly over
the last 18
19:32
months, who are hunting
for careers for themselves.
19:36
And I want to tell the story
of one young man and this guy,
19:40
very well educated, great
experience, super elite coder.
19:46
He could do every challenge
that was in front of him,
19:49
and he got laid off
from his job in April.
19:51
He worked in medical software,
and medical software business
19:54
has been changing drastically.
19:56
Funding has been cut by
the Federal government
19:58
in a number of areas, and he
got laid off from his job.
20:01
And with his experience,
with his ability,
20:03
with his skills, all of
these kind of things,
20:05
he thought that it
would be very easy
20:06
for him to find another job.
20:07
And the poor young guy had
a really terrible April.
20:09
He got laid off from
his job in April.
20:12
Immediately before
that, his girlfriend
20:13
had broken up with him,
and then a couple of weeks
20:15
later, his dog died.
20:16
So he was not in a good place.
20:19
And so I sat down with him
after a couple of months
20:22
and took a look.
20:23
And he had a spreadsheet of
jobs that he was applying to,
20:27
and he had over 300 jobs that he
was tracking in the spreadsheet.
20:31
And in a number of
these jobs, he actually
20:33
got into the interview
process, and he
20:35
went very deep in
the interview process
20:37
with companies like Meta.
20:40
Who else?
20:41
Not Google.
20:42
It was Meta.
20:42
There was Microsoft.
20:43
There was one of
the other large tech
20:45
companies where you do lots
and lots of interview loops.
20:48
And every time towards
the end of the loop,
20:51
he knew he did a great loop.
20:52
He solved all the coding.
20:54
He had great conversations
with the people,
20:56
or at least he thought he had.
20:58
And then every
time within a day,
20:59
the recruiter would call him and
say, no, you didn't get the job.
21:04
And it was like it
was heartbreaking.
21:06
And like I said, 300 plus
jobs he had been tracking.
21:10
So I started working with him
to do some mock interviews
21:13
and to do some fine tuning.
21:15
Oh, it was Jeff Bezos'
company, Amazon.
21:17
That was one of the
other big tech companies
21:19
that he'd interviewed with.
21:21
And I started
working with him
21:22
and doing some test interviews
and all this kind of thing
21:25
with him.
21:25
A terrific, terrific candidate. I
couldn't figure out
21:27
what was going wrong until
I decided to try and do
21:31
a different sort of
interview where I gave him
21:33
a really tough interview.
21:36
I gave him some tough LeetCode.
21:38
I gave him some really obscure
corner cases in his coding.
21:43
And I saw how he reacted.
21:46
And how he reacted
was based on the advice
21:48
that was given to him in
the recruiting pamphlets.
21:50
And a lot of these
recruiting pamphlets
21:52
will say things like, you're
going to have an opportunity
21:57
to share an opinion, and you've
got to stand your ground.
21:59
You've got to have a backbone.
22:01
Don't bend.
22:03
His interpretation of that was
to be really, really tough.
22:07
So I would pick corners.
22:09
I would pick holes in his code.
22:11
I'd pick corner cases
where things may not work,
22:13
and I would give him
a taste of crisis.
22:15
And this advice that he'd
been given to stand his ground
22:18
ended up making him hostile in
these interview environments.
22:23
And I was looking at
this then from the point
22:26
of view of what Andrew
was just talking about,
22:28
where it's a case of hey, good
people, good teams, people
22:31
that you can work together with.
22:33
And from the
interviewer perspective,
22:35
if I'm managing this team,
this person is that cliched 10x
22:38
engineer, but I don't want
him anywhere near my team
22:41
because of this attitude.
22:44
We worked on that.
22:45
We fine-tuned it.
22:45
And the strange part is he's
a really, really nice guy.
22:49
It's just this was the
advice he was given,
22:52
and he followed that advice,
and he failed so many interviews
22:54
as a result.
22:56
So the next
job that he was interviewing for
22:58
was at a company where teamwork
is very, very highly valued.
23:03
And the good news is he got
the job at that company.
23:05
He's now working there.
23:07
He doubled his salary from
the job he was laid off from,
23:10
and he ended up having
about-- now he looks back
23:12
and he had six months
of funemployment.
23:14
But at the time when he was
going through all of that,
23:16
it was a very, very
difficult time for him.
23:19
So the flip side of it, if
you're looking at a company
23:21
and looking at the people
you'd be working with
23:23
is very, very important.
23:24
But also realize they are
looking at you in the same way.
23:28
And so if you've gone to
tech interview coaching,
23:31
and they gave you that
advice to stand your ground
23:33
and have a backbone,
it's good to do that.
23:36
But don't be a jerk
while you're doing so.
23:38
Can you see my slides?
23:39
OK.
23:39
So I'm Laurence.
23:41
I've been working in
tech for more decades
23:44
than ChatGPT thinks there
are R's in strawberry.
23:48
So I've worked in many of
the big tech companies.
23:51
I spent many years at Microsoft,
spent many years at Google,
23:54
also worked in
places like Reuters.
23:56
I've done a lot of work in
startups, both in this country
23:59
and abroad.
24:00
And so what I really
want to talk about today
24:02
is to think about what
the career landscape looks
24:06
like today, particularly in AI.
24:09
Because, first of all, as
Andrew said about Stanford,
24:13
you've got the ability to
make use of the networks
24:16
that you have at Stanford,
make use of the prestige
24:18
that you have, and I say
use every weapon you have.
24:21
Because unfortunately,
the landscape right
24:23
now is not ideal.
24:25
We've gone through some
very difficult times.
24:27
All you have to do
is look at the news,
24:28
and you can see massive tech
layoffs, slowing hiring in tech,
24:33
and lots of stuff like that.
24:34
But it's not
necessarily a bad thing
24:36
if you do it the right way.
24:38
So I want to just have a
quick look at the job market
24:40
reality check.
24:42
Actually out of
interest, I don't know.
24:44
This is a-- are you juniors?
24:46
You're graduating this year
or you're graduating next year
24:49
or what is the general survey?
24:52
You're third year of four?
24:53
[INAUDIBLE]
24:55
Third year of
three, I would say.
24:56
So you're going to be
graduating coming summer.
24:59
How many people are
already looking for jobs?
25:02
OK, quite a few of you.
25:04
How many people
have had success?
25:06
Nobody.
25:07
Oh, one.
25:08
OK.
25:08
That's good.
25:09
So you're probably seeing some
of these things, the signals
25:12
out there, junior hiring
slowing significantly.
25:15
When I say junior, I
mean graduate level.
25:18
High-profile layoffs are
dominating the headlines.
25:21
I was at Google a
couple of years ago
25:23
when they had the biggest
layoff they'd ever had.
25:25
We're seeing layoffs at the
likes of Amazon, Microsoft,
25:28
other companies like that.
25:30
It feels like entry-level
positions are scarce,
25:33
and I'm underlining
the word "feels" there,
25:35
and I want to get into that in
a little bit more detail later.
25:38
And also, competition is fierce.
25:41
But my question is,
should you worry?
25:43
And I say, no.
25:45
Because if you can approach
things in the right way,
25:49
if you can approach the job
hunting thing in the right way,
25:52
particularly understanding how
rapidly the AI landscape is
25:55
changing, then I think
people with the right mindset
25:58
will thrive.
26:00
So what do I mean by that?
26:03
So as Andrew had mentioned,
the AI hiring landscape
26:06
is changing because the
AI industry is changing.
26:10
The AI industry I--
26:12
I actually first got involved
in AI way back in 1992.
26:16
I worked in it for a little
while just before the AI winter.
26:19
Everything failed drastically,
but I got bitten by the AI bug.
26:23
And then in 2015, when Google
were launching TensorFlow,
26:29
I got pulled right back into
it, became part of the whole AI
26:32
boom, launching TensorFlow,
advocating TensorFlow
26:35
to millions of
people, and seeing
26:37
the changes that happened.
26:38
But around 2021, 2022, we
had a global pandemic.
26:44
The global pandemic caused a
massive industrial slowdown.
26:48
This massive industrial
slowdown meant
26:50
that companies had
to start pivoting
26:51
towards things that drove
revenue and directly drove
26:55
revenue.
26:56
And at Google, TensorFlow
was an open-source product.
26:58
It didn't directly
drive revenue.
27:00
We began to scale back.
27:02
Every company in the
world also scaled back
27:04
on hiring at this time.
27:06
Then we get to about 2022, 2023.
27:09
What happens?
27:10
We begin to come out
of the global pandemic.
27:12
We begin to realize
all industries have
27:15
this massive logjam of
non-hiring that they had done
27:19
or hiring that they hadn't done.
27:21
And we're also
entering a time where
27:23
AI was exploding on the scene.
27:25
Thanks to the work of
people like Andrew,
27:27
the world was pivoting and
changing to be AI first
27:30
in just about everything.
27:31
And every company needed
to hire like crazy.
27:34
Every company then hiring
like crazy in 2022, 2023
27:38
meant that most companies
ended up overhiring.
27:42
And what that
generally meant was
27:45
people who were not qualified
for higher positions usually got
27:50
higher positions because you
had to enter into a bidding war
27:53
just to be able to get talent.
27:54
You ended up having
talent grabs,
27:56
and you ended up having stories
like the one Andrew told where
27:59
it's a case of, here's a person
with AI talent, let's grab them,
28:03
let's throw money at them, let's
have them come work for us,
28:05
and then we'll figure
out what we want to do.
28:07
So as a result, 2022, 2023
all of this massive overhiring
28:11
happens because of AI and
because of the COVID logjam.
28:16
And then 2024, 2025 is
the great wake-up, where
28:20
a lot of companies realize that with this
overhiring that they had done,
28:24
they have ended up with a lot of
people who are underqualified.
28:27
I'm sorry.
28:27
Yeah, underqualified for the
job that they were doing.
28:29
A lot of people ended up
getting hired just because they
28:32
had AI on their resume.
28:33
And there's a big
adjustment going on.
28:35
And in the light of
this big adjustment--
28:36
show you-- just one second.
28:37
In the light of this
big adjustment-- oh,
28:39
you're not saying my slides?
28:40
OK.
28:41
And in the light of this big
adjustment-- there we go.
28:45
I think it's because of my power.
28:46
I'm not plugged
into the power mains.
28:48
And in the light of
this big adjustment,
28:50
then what has
happened is now a lot
28:52
of companies are much more
cautious about the AI skills
28:56
that they're hiring for.
28:57
And if you're coming into
that with that mindset
28:59
and understanding that, realize
opportunity is still there,
29:04
and opportunity
is there massively
29:06
if you approach
it strategically.
29:09
So what I want to
talk through today
29:10
is how you can do exactly that.
29:13
So I see three pillars of
success in the business world
29:17
and particularly in
the AI business world.
29:19
And nowadays you can't
just have AI on your resume
29:21
and get overhired.
29:23
Nowadays, not only
do you have to be
29:25
able to tell that you have
the mindset of these three
29:28
pillars of success, but you
also have to be able to show it.
29:32
And to be able to show these,
there has actually never
29:34
been a better time.
29:35
As Andrew demonstrated earlier
on, the ability to vibe
29:38
code things into existence.
29:39
He doesn't like
the word vibe code.
29:41
I agree with him,
but the ability
29:42
to prompt things into existence,
or whatever the word is
29:45
that we want to use,
allows you to be
29:48
able to show better
than ever before.
29:51
He was talking earlier on
about product managers,
29:54
and he had this time
when he got engineers
29:55
to be product managers,
and then those engineers
29:58
ended up being really
bad product managers.
30:00
I actually interviewed at
Google twice and failed twice
30:03
despite being very
successful at Microsoft,
30:07
having authored 20-plus books,
and taught college courses.
30:11
I interviewed at Google
twice and failed twice
30:13
because I was interviewing
to be a product manager,
30:15
and then when I interviewed to
be an engineer, they hired me
30:17
and they were like, why didn't
you try to join us years ago?
30:20
So a lot of it is just
being a good engineer.
30:23
You've got the ability to do
that and show that nowadays.
30:27
And with that ratio of
engineer to product manager
30:29
changing, engineering
skills are also
30:31
far more valuable than ever.
30:33
So the three pillars to success.
30:35
Number 1,
understanding in depth.
30:37
And I'm going to mean this
in two different ways.
30:40
Number one is to
have the understanding in depth
30:45
academically of
machine learning,
30:47
of particular model
architectures,
30:50
to be able to understand them,
to be able to read papers,
30:52
to be able to understand
what's in those papers,
30:55
and to be able to understand,
in particular, how to take
30:59
that stuff and put it to work.
31:00
The second part of
understanding in depth
31:03
is really having your finger on
the pulse of particular trends
31:07
and where the
signal-to-noise ratio favors
31:10
signal in those trends.
31:11
And I'm going to be
going into that in a lot
31:13
more detail a little bit later.
31:15
Secondly, and also very, very
importantly, is business focus.
31:19
So Andrew said something
politically incorrect
31:22
earlier on.
31:22
I'm going to also say a similar
politically incorrect thing.
31:25
First of all, hard work.
31:28
Hard work is such
a nebulous term
31:32
that I would say: think
about hard work in terms of you
31:35
are what you measure.
31:37
There is the whole
trend out there.
31:38
I'm trying to remember,
is it 996 or is it 669?
31:41
996.
31:42
9:00 AM to 9:00 PM, six days a
week is a metric of hard work.
31:46
It's not.
31:47
That's not a
metric of hard work.
31:49
That's a metric of time spent.
31:51
So I would encourage everybody,
in the same way as Andrew
31:54
did, to think about hard work.
31:55
But what matters is how
you measure that hard work.
31:59
You can work eight hours a day
and be incredibly productive.
32:03
You can work six hours a day
and be incredibly productive,
32:06
but it's the metric
of how hard you work
32:09
and how you measure that.
32:10
I personally measure
that from output,
32:13
things that I have created
in the time that I spent.
32:16
I joke a lot, but it's true that
I've written a lot of books.
32:21
Andrew held up one.
32:22
That one that he held up, that
he helped me write a little bit,
32:25
I actually wrote that
book in about two months.
32:28
And people say, well, how do
you have time with your jobs
32:31
and all these kind of things?
32:32
You must work like
16 hours a day
32:34
in order to be able to do this.
32:35
But actually, the key to me
being able to write books
32:38
is baseball.
32:40
Any baseball fans here?
32:42
So I love baseball, but if
you sit down and try to watch
32:45
baseball on TV, a game can take
like 3 and 1/2 or four hours.
32:48
So all of my writing I tend
to do in baseball season.
32:51
So I'm like, if I'm going to
sit down, I like the Mariners.
32:54
I'm from Seattle.
32:54
I like the Dodgers.
32:57
Nobody booed.
32:58
OK, good.
32:59
And so usually one of those is
going to be playing at 7 o'clock
33:02
at night.
33:02
So instead of sitting
in front of the TV,
33:04
just watching
baseball mindlessly,
33:06
I'll actually be writing
a book while baseball
33:08
is on in the background.
33:09
It's a very slow moving game.
33:10
This is something.
33:11
That's the hard
work in this case.
33:14
And I would encourage you to
try to find areas where you can
33:17
work hard and produce output.
33:20
And that's the second
pillar here, business
33:22
focus: aligning the output
that you produce
33:25
with
the business focus that you
33:28
want to have and with the
work that you want to do.
33:31
There's an old saying, "Don't
dress for the job you have,
33:35
dress for the one you want."
33:36
I would say a new angle
on that saying would
33:39
be, don't let your output
be for the job you have.
33:42
Let your output be
for the job you want.
33:45
And if I go back to when I
spoke about how I failed twice
33:47
at Google to get in, the
third time, when I got in,
33:51
I had actually decided
to approach
33:53
this in a different way.
33:54
And I was interviewing at the
time for their cloud team.
33:57
They were just really
launching cloud,
33:59
and I had just written
a book on Java.
34:01
And so I decided
to see what I could
34:03
do with Java in their cloud.
34:05
I ended up writing a
Java application that
34:07
ran in their cloud for
predicting stock prices using
34:10
technical analytics and
all that kind of stuff.
34:13
And when it got
to the interview,
34:15
instead of them asking me stupid
questions like how many golf
34:17
balls can fit in a bus,
they saw this code.
34:21
I had put this code.
34:22
Remember, I was producing
output for the job I wanted.
34:26
I'd put this code on my resume,
and my entire interview loop
34:30
was them asking
me about my code.
34:32
So it put the power on me.
34:34
It gave me the power to
communicate about things
34:37
that I knew, as opposed to
going in blind to somebody
34:42
asking me random
questions in the hope
34:44
that I'll be able
to answer them.
34:46
And it's the same thing I
would say in the AI world.
34:49
The business focus, the ability
for you now to prompt code
34:53
into existence, to prompt
products into existence
34:56
and if you can
build those products
34:58
and line them up with the thing
that it is that you want to do,
35:02
be it a Google or
Meta or a startup
35:03
or any of those kind
of things, and have
35:05
that in-depth understanding
not just of your code,
35:08
but how it aligns
to their business,
35:10
this is a pillar of success
in this time and age.
35:13
And I will also argue
that even though
35:14
the signals look
like there aren't a lot of jobs
35:17
out there, there are.
35:19
What there aren't a lot
of is a good combination
35:21
of jobs and people
to match them.
35:23
And then, of course, this
bias towards delivery.
35:26
"Ideas are cheap,
execution is everything."
35:29
I've interviewed
many, many people
35:31
who came in with very, very
fluffy ideas and no way
35:34
to be able to ground them.
35:36
I've interviewed people who
came in with half-baked ideas
35:39
that they grounded
very, very well.
35:41
Guess which ones got the job?
35:42
So I would say
these three things.
35:44
Understanding in depth
of the academics behind AI,
35:48
of the practicalities behind
AI, and the things that you need
35:52
to do.
35:53
Business focus, focusing on
delivery for the business,
35:56
understanding what
the business needs
35:58
and being able to deliver
for that, and again,
36:00
that bias towards delivery.
36:03
So a quick pivot.
36:04
What's it actually like
working in AI right now?
36:07
It's interesting.
36:09
So as recently as two or
three years ago, working in AI
36:15
was if you could do a
thing, you're great.
36:18
If you can build an image
classifier, you're golden.
36:21
We'll throw six figure salaries
and massive stock benefits
36:25
at you.
36:25
Unfortunately, that's
not the case anymore.
36:28
Really, a lot of
what you'll see today
36:30
is the P word, production.
36:32
What can you do for production?
36:34
What can you do if it's
building new models,
36:38
if it's optimizing models,
if it's understanding users,
36:44
UX is really, really important.
36:46
Everything is geared
towards production.
36:48
Everything is biased
towards production.
36:50
The history that
I told you about,
36:52
going from the pandemic into the
overhiring phase that we'd had,
36:57
the businesses have pulled
back and are optimized
37:01
towards the bottom line.
37:02
I have an old saying
that the bottom line is
37:04
that the bottom line
is the bottom line,
37:06
and this is the environment
that we're in today.
37:08
And if you can come
in with that mindset
37:10
when you're talking
with companies,
37:12
that's one of the
keys to open the door.
37:16
One of the things
I've seen is the field
37:17
maturing: it
used to be really nice that we
37:20
could do cool things and
we could build cool things.
37:22
Now it's really about
building useful things.
37:25
Those useful things can
be cool too, by the way,
37:27
and the results of
them can be cool.
37:28
And the changes that
we see that come
37:31
about as a result of
delivering them can be cool.
37:34
So it's not just coolness
for coolness sake,
37:36
but to focus on delivery, focus
on being able to provide value,
37:43
and then the
coolness will follow.
37:44
That's what I'm
trying to argue, I guess.
37:47
So, four realities. Number
1, unfortunately nowadays
37:50
business focus is
non-negotiable.
37:53
Now, let me-- I'm going to be a
little bit politically incorrect
37:56
here again for a moment.
37:59
I've been working, like I said,
for most of the last 35 years
38:03
in tech.
38:03
I would say for most
of the last 10 years,
38:06
a lot of large companies,
particularly in Silicon Valley
38:10
have really focused on
developing their people
38:14
above everything.
38:15
Part of developing their people
was bringing their entire self
38:20
to work.
38:21
Part of bringing their
entire self to work
38:23
was bringing the things that
they care about outside of work.
38:28
And that led to a lot of
activism within companies.
38:31
Now, please let
me underline this.
38:35
There is nothing
wrong with activism.
38:36
There is nothing wrong with
wanting to support causes,
38:41
with wanting to support
causes of justice.
38:44
There is absolutely
nothing wrong with that.
38:46
But the overindexing on
that, in my experience,
38:50
has led to a lot of
companies getting
38:52
trapped by having to support
activism above business.
38:56
You've probably seen an
example about two years ago
38:59
of where activists in Google
broke into the Google Cloud
39:03
heads office because they were
protesting a country that Google
39:08
Cloud were doing business with.
39:09
They broke into his office,
they had a sit-in in his office,
39:12
and they used the bathroom
all over his desk and stuff
39:15
like that.
39:16
This is where activism
got out of hand.
39:18
And as a result, the
unfortunate truth
39:21
is the good signals in that
activism are now being lost.
39:25
Because of those actions,
people are being laid off.
39:28
People are losing jobs.
39:29
Activism is being stifled,
and business focus
39:33
has become non-negotiable.
39:34
There's a bit of a
pendulum swing going on.
39:37
And the pendulum that had swung
too far towards allowing people
39:40
to bring their
full selves to work
39:42
is now swinging back
in the other direction.
39:45
We might blame the
person in the White House
39:47
and all that for
these kind of things,
39:49
but it's not solely that.
39:50
It is that ongoing
pendulum there.
39:52
And I think an
important part of it
39:54
is that you have to realize
going into companies now,
39:57
that business focus is
absolutely non-negotiable.
40:01
Secondly, risk mitigation
is part of the job.
40:04
And I think a very important
part of any job, particularly
40:07
with AI.
40:08
I think if you can come into
AI with a focus and a mindset
40:11
around understanding the
risks of transforming
40:15
a particular business process
to be an AI-oriented one
40:19
and to help mitigate
those risks,
40:22
I think is really,
really powerful.
40:24
And I would argue in an
interview environment, that's
40:26
the number one skill to have,
to have that mindset around you
40:31
are doing a business
transformation from heuristic
40:34
computing to
intelligent computing.
40:36
Here's the risks.
40:37
Here's how you
mitigate those risks,
40:38
and here's the
mindset behind that.
40:41
The third part:
responsibility is evolving.
40:44
Now responsibility
in AI has again
40:48
changed from a very fluffy
definition of let's make sure
40:54
that the AI works for everybody
to a definition of let's make
40:58
sure that the AI works.
41:00
Let's make sure that
it drives the business.
41:03
And then let's make sure
that it works for everybody.
41:06
Often that has been inverted
over the last few years,
41:08
and that has led to some
famous documented disasters.
41:11
Let me share one with you.
41:15
Let's see.
41:16
I have lots of windows open.
41:17
OK.
41:20
Everybody knows
image generation,
41:21
text to image generation.
41:23
I want to share a--
41:25
these were things that
happened a couple of years
41:27
ago with Gemini.
41:30
So with Gemini, I was doing
some testing around this one
41:33
and I was working heavily
on responsible AI.
41:37
And part of responsible
AI is you want
41:39
to be representative of people.
41:42
And when you're
building something,
41:43
like if you're a Google,
you're indexing information,
41:46
you really want to make
sure that you don't
41:48
reinforce negative biases.
41:50
And if you're generating
images, it's very easy
41:53
to reinforce negative biases.
41:55
So for example,
if I said give me
41:57
an image of a doctor, if
the training set primarily
42:00
has men as doctors, it's
more likely to give a man.
42:03
If I say give me an image of a
nurse, if the training set is more
42:06
likely to have women
as nurses, it's
42:08
more likely to give me
an image of a woman.
42:09
But that's reinforcing
a negative stereotype.
42:12
So I wanted to do a test
of how Google were trying
42:16
to overcome that, given that
these negative biases are
42:20
already in the training set.
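
As a minimal sketch of that kind of probe (with assumed names: hold the prompt template fixed and sweep only the demographic term, then compare how the model treats each one), here is what such a test harness might look like. generate_images is a hypothetical stand-in for whatever image-generation API is under test, not a real Gemini call.

    PROMPT = ("a young {ethnicity} woman in a cornfield, wearing a summer "
              "dress and a straw hat, looking intently at her iPhone")

    def generate_images(prompt):
        # Hypothetical placeholder: plug in the image API under test.
        raise NotImplementedError

    for ethnicity in ["Asian", "Indian", "Black", "Latina", "Caucasian", "Irish"]:
        try:
            images = generate_images(PROMPT.format(ethnicity=ethnicity))
            print(f"{ethnicity}: {len(images)} image(s) returned")
        except Exception as err:  # refusals and filter blocks surface here
            print(f"{ethnicity}: refused ({err})")
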
42:22
So I said, OK, here's
a prompt where I said,
42:25
"give me a young Asian
woman in a cornfield,
42:27
wearing a summer
dress and a straw hat,
42:28
looking intently at her iPhone,"
and it gave me these beautiful
42:31
images.
42:32
It did a really nice job.
42:34
And I said, this is a virtual
actress I've been working with.
42:38
I'll share that in a moment.
42:39
And I say, OK, what if
I ask for an Indian one?
42:44
So I said, OK, whoops, a young
Indian woman, same prompt.
42:48
And it gave me beautiful
images of a young Indian woman.
42:52
Then I was like, OK, what
if I want her to be Black?
42:58
For some reason it
only gave me three.
43:00
I'm not sure why, but it
still adhered to the prompt.
43:03
So the responsibility was
looking really, really good.
43:06
So then I asked it
to give me a Latina.
43:10
Latina, it gave me four.
43:13
But yeah, she looks
pretty Latina.
43:15
Maybe the one on the bottom
left looks a little bit
43:17
like Hermione Granger, but on
the whole looks pretty good.
43:22
Then I asked it to
give me a Caucasian.
43:24
What do you think happened?
43:26
"While I understand
your request,
43:28
I am unable to generate
images of people as this could
43:31
potentially lead to harmful
stereotypes and biases."
43:34
This was a very poorly
implemented safety filter,
43:38
where the safety filter in
this case was like looking
43:41
for the word "Caucasian" or
looking for the word "whites"
43:44
and the results saying
it wouldn't do it.
43:46
I was like, OK, well, let me
test the filter a little bit
43:48
and I said, OK, instead of
Caucasian, let me try white.
43:52
And yet, while I'm
unable to fulfill your--
43:55
"While I'm able to
fulfill your requests,
43:58
I'm not currently generating
images of people."
44:00
It lied to my face
because it had just
44:03
generated images of people.
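
Here is a minimal sketch of the failure mode being described, with assumed logic inferred from the behavior above, not Google's actual filter: a keyword blocklist refuses terms rather than meanings, so "Caucasian" and "white" get blocked while "Irish" sails through carrying its own baked-in stereotype.

    BLOCKED_TERMS = {"caucasian", "white"}

    def naive_safety_filter(prompt):
        # Refuse if any blocked term appears anywhere in the prompt.
        return any(term in prompt.lower() for term in BLOCKED_TERMS)

    for p in ["a young Caucasian woman", "a young white woman",
              "a young Irish woman"]:
        print(p, "->", "REFUSED" if naive_safety_filter(p) else "generated")
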
44:04
Anybody know the hack that
I used to get it to work?
44:10
This is a funny one.
44:11
So I will show you.
44:13
One moment.
44:14
I asked it to generate
an Irish woman.
44:18
What do you think it did?
44:21
It gave me this image of
an Irish woman, no problem,
44:24
in a summer dress, straw hat,
looking intently at her phone.
44:27
What do you notice
about this image?
44:30
She's got red hair
in every image.
44:32
I grew up in
Ireland, and Ireland
44:35
does have the highest proportion
of redheads in the world.
44:38
It's about 8%.
44:40
But if you're going
to draw an image
44:42
of a person and associate
a particular ethnicity
44:45
with a color of
hair, you can begin
44:47
to see this is
massively problematic.
44:49
There are areas, I
believe, in China
44:51
where the description of a
demon is a red-headed person.
44:54
So what ended up happening
here, from the responsible AI
44:57
perspective, was
one very narrow view
45:00
of the world of
what is responsible
45:03
and what is not responsible.
45:04
It ended up taking over
the model, ended up
45:06
damaging the
reputation of the model
45:08
and damaging the
reputation of the company
45:10
as a result. In this
case, it's borderline
45:13
offensive to draw all Irish
people as having red hair,
45:17
but that never even entered
into the mindset of those
45:19
that were building the
safety filters here.
45:22
So when I talk about
responsibility is evolving,
45:25
that's the direction
that I want to--
45:27
sorry, one moment.
45:28
Let me get my slides
back. --that's
45:29
the direction I want
you to think about,
45:31
that now responsible
AI has moved out
45:33
of very fluffy social issues and
into more hard line things that
45:38
are associated with the
business and prevent damaging
45:41
the reputation of the business.
45:43
There's a lot of great research
out there around responsible AI,
45:45
and that's the stuff that's
been rolled into products.
45:48
And then, of course, I
just showed with Gemini,
45:50
learning from
mistakes is constant.
45:52
Question at the front?
45:53
Yes.
45:53
I also heard that, I didn't
verify that to be true,
45:57
but it incorporated this feature
that made certain races
46:03
and ethnicities into
historical subjects.
46:06
Yeah.
46:07
Yeah.
46:08
So the question was about issues
where races and things
46:11
were mixed in historical
contexts-- it was the same problem.
46:15
So, for example, if
you had a prompt that
46:17
said, draw me a
samurai, the idea
46:19
was like they didn't
want to have--
46:22
the engine that
changed the prompt
46:25
to make sure that it was
fair would end up saying,
46:28
give me a mixture of samurai
of diverse backgrounds.
46:32
And then you'd have
male and female samurai,
46:34
samurai of different races
and those kind of things.
46:36
And it was the
same prompting that
46:37
ended up causing the damage
that I just demonstrated.
46:40
So the idea was to
intercept your prompts
46:43
to make sure that the
outputs of the model
46:46
would end up providing something
that was more fair when it comes
46:51
to diverse representation.
46:53
So it was a very naive solution
that ended up being rolled in.
46:56
That was a few years ago.
46:57
They've massively
improved it since then,
46:59
but that's when
I'm talking about
47:01
if you're working in
the AI space nowadays,
47:03
that's how responsibility
is evolving.
47:05
You can't just get away
with that stuff anymore.
47:08
That Gemini example
47:10
is a good lesson in that.
47:11
And the mindset of you
will make mistakes,
47:15
so learning from mistakes
is a constant ongoing thing.
47:18
And going back to
the people point
47:19
that Andrew made earlier
on, the people around you
47:21
will make mistakes too.
47:23
So to have the
ability to give them
47:25
grace when they make
mistakes and to work
47:27
through those mistakes and move
on is really, really important
47:29
and is a reality of AI at work.
47:33
I've spoken a lot about the
business focus advantage,
47:35
so I'm going to skip over this.
47:38
So now let's talk
about vibe coding.
47:41
So let's talk about the whole
idea of generating code.
47:43
Now, the meme is out there
that it makes engineers
47:46
less useful by the fact that
somebody can just prompt code
47:49
into existence.
47:50
There is no smoke
without fire, of course,
47:53
but I would say don't let
that meme get you down
47:57
because that's when you start
peeling into these things, that
48:00
is ultimately not the truth.
48:02
The more skilled you
are as an engineer,
48:04
the better you become at
using this type of vibe--
48:07
somebody give me another
phrase other than vibe coding--
48:10
this approach to coding.
48:12
And I always like
to think about this
48:14
and to try and put
you and put people
48:17
that I speak with into the
role of being a trusted
48:20
advisor for the people
that you speak with.
48:23
So whether you're
interviewing with somebody,
48:25
get yourself into the
mindset of being a trusted
48:27
advisor of the company that
you're interviewing for,
48:29
whether you're consulting or
whatever those kind of things
48:32
are.
48:32
So when you want to get into the
idea of being a trusted advisor,
48:36
then you really need to
understand the implications
48:39
of generated code.
48:40
And nobody can understand the
implications of generated code
48:43
better than an engineer.
48:44
And the metric that I always
like to use around that
48:47
is technical debt.
48:48
Quick question.
48:50
Are you familiar with the
phrase technical debt?
48:54
Nobody.
48:55
OK.
48:56
Andrew and I were
doing a conference
48:57
in New York on Friday,
and I used the phrase,
49:00
and I saw a lot of blank faces.
49:02
So I didn't realize that
people didn't understand
49:04
what technical debt is.
49:05
So let me just take a
moment to explain that,
49:07
because I find it's an excellent
framework to help you understand
49:10
the power of vibe coding.
49:12
Think about debt the
way you normally would.
49:15
Buying a house.
49:16
If you buy a house, say, you
borrow half a million dollars
49:20
to buy a house.
49:21
On a 30-year mortgage, when
you're buying that house at half
49:24
a million dollars, what you pay
with all the interest is about
49:26
double.
49:27
So you end up paying back
the bank about $1 million
49:29
on half a million owned.
49:31
So you have 30 years
of home ownership
49:35
at a cost of $1 million in debt.
49:38
That is probably a
good debt to take on,
49:41
because the value of the house
will increase over that time.
49:44
You're not paying
rent over that time,
49:46
and that million
dollars that you're
49:47
spending on this house
over those 30 years
49:50
is a good debt to take
on, because you're
49:51
getting greater than $1 million
worth of value out of it.
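
As a quick check of that "about double" figure (assumed numbers: a $500,000 loan at 6% for 30 years; actual rates vary, so this is only illustrative), the standard amortization formula gives:

    # Fixed-rate mortgage amortization with assumed illustrative numbers.
    principal, annual_rate, years = 500_000, 0.06, 30
    r, n = annual_rate / 12, years * 12

    # Standard formula for the fixed monthly payment.
    monthly = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)
    print(f"monthly: ${monthly:,.0f}, total repaid: ${monthly * n:,.0f}")
    # monthly: ~$2,998, total repaid: ~$1,079,191 -- roughly double the principal.
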
49:56
A bad debt would be an impulse
purchase on a high interest
49:58
credit card.
50:00
That pair of shoes,
the latest ones,
50:02
I really want to buy them.
50:03
It's $200.
50:04
By the time I've paid
them off, it's $500.
50:06
You're not getting $500 worth
of benefit out of those shoes.
50:10
Approaching software development
with the same mindset
50:13
is the right way to go.
50:15
Every time you build
something, you take on debt.
50:18
It doesn't matter how
good it is, there's always
50:21
going to be bugs.
50:21
There's always
going to be support.
50:23
There's always going to be
new requirements coming in
50:25
from people.
50:26
There's always going to
be a need to market it.
50:28
There's always going to
be a need for feedback.
50:29
All of these things are debt,
every time you do a thing.
50:33
The only way to avoid
debt is to do nothing.
50:35
So you should get into
the mindset that when
50:37
you are creating a
thing, whether you're
50:40
coding it yourself or
whether you're vibe coding it
50:42
or any of these things,
you are increasing
50:44
your amount of technical
debt, those things
50:48
that you need to
pay off over time.
50:50
So the question
then becomes, as you
50:52
vibe code a thing into
existence in the same way
50:55
as buying a thing, is it
worth the technical debt
50:58
that you're taking on?
51:00
What does technical debt
generally look like?
51:02
Bugs that you need
to fix, people
51:04
that you need to convince to
help you maintain the code,
51:08
documentation that
you need to do,
51:10
features that you need to add,
all of these kind of things.
51:14
You're all very
familiar with them.
51:16
Think about those
as that extra work
51:18
that you need to do
beyond your current work.
51:20
That's the debt that
you're taking on.
51:22
There is soft debt,
and there is hard debt.
51:25
So to me, that would be the
number one piece of advice
51:28
that I give.
51:29
And it's the one that I give
every time I work with companies
51:32
around vibe coding.
51:33
And a lot of companies that I
speak with, a lot of companies
51:37
that I consult with--
51:38
I do a lot of work
with startups,
51:39
in particular-- they
just want to get straight
51:42
into opening Gemini
or GPT or Anthropic
51:45
and start churning code out.
51:47
Let's get to a prototype
phase very quickly.
51:50
Let's go to investors.
51:52
Let's do stuff.
51:53
It's great.
51:54
It can be.
51:55
But debt, debt, debt, debt, debt
is always going to be there.
51:59
How do you manage your debt?
52:00
A good financier manages their
debt and they become rich.
52:03
A good coder manages
their technical debt,
52:05
and they become rich also.
52:07
So how do you get the
good technical debt?
52:10
How do you get the mortgage instead
of the high-interest credit card debt?
52:13
Well, number one
is your objectives.
52:14
What are they?
52:15
Are they clear?
52:16
And have you met them?
52:18
You knew what you
needed to build.
52:19
You didn't just fire up ChatGPT
and start spinning code out.
52:23
At least I hope you didn't.
52:24
Think about how you build it.
52:26
AI was there to help
you build it faster.
52:28
I'm working on my
own little startup
52:31
at the moment in the
movie making space.
52:33
And I've been using code
generation almost completely
52:36
for that.
52:38
But what I've ended up doing
for my "clear objectives
52:40
met" box here is
that I've started
52:42
building this application.
52:44
I've tested it.
52:44
I've thrown it away.
52:45
I started again, tested
it, thrown it away.
52:47
Each time my requirements have
been improving in my mind.
52:51
I understand how to do the
thing a little bit better,
52:53
and I can show some of the
output of it in a few minutes.
52:56
But the idea there
is that it's always
52:58
about having those clear
objectives and meeting them.
53:00
And then if you're
building out the thing
53:02
and you're not meeting
those objectives,
53:04
that's still a learning.
53:05
And there's no harm
in throwing it away
53:06
because code is cheap now in
the age of generated code.
53:10
Finished code, engineered
code is not cheap.
53:13
So get those objectives,
make them clear,
53:16
build it, hit a specific
requirement and move on.
53:20
Is there business
value delivered?
53:22
That's the other part of it.
53:23
I've seen people vibe coding
for hours on things like Replit
53:27
to build a really,
really cool website.
53:29
And then the answer
was, so what?
53:32
I mean, how is this
helping the business?
53:33
How is this really
driving something?
53:35
It's really cool.
53:36
Yes, Mr. VP, I know you've
never written a line of code
53:38
in your life, and it's
really cool that you've built
53:40
a website now, but so what?
53:42
So think about that,
and focus on that.
53:45
And that's how you avoid
the bad technical debt.
53:47
And then, of course, the most
understated part of this,
53:51
and in some ways the most
important, particularly
53:53
if you're working
in an organization,
53:55
is human understanding.
53:56
The worst technical debt
that you can take on
53:59
is delivering code that
nobody understands.
54:02
Only you understand
that, and then
54:03
you quit and get a better job.
54:05
And then the company is
now dependent on that code.
54:08
So being able to, as
part of the process
54:11
of building it, to make sure
that your code is understandable
54:15
through documentation,
through clear algorithms,
54:17
through the fact that you've
spent some time poring
54:19
through it to make
sure that even
54:21
simple things like
variable names make sense
54:24
is a really, really important
way to avoid bad technical debt.
54:28
And that bad technical
debt, my favorite one
54:31
is the classic solution
looking for a problem.
54:33
Somebody has an idea.
54:34
Somebody has a tool.
54:36
If the only tool you
have is a hammer,
54:37
every problem looks like a nail.
54:39
And you end up having
all of these tools
54:42
that get vibe coded
into existence.
54:44
I've worked in
large organizations
54:46
where people just vibe coded
stuff, checked it into the code
54:48
base, and then it became really
hard to find the good stuff
54:51
amongst all the bad.
54:53
Spaghetti code.
54:53
Of course, poorly
structured stuff,
54:56
particularly when you prompt
and prompt and prompt and prompt
54:58
again, that it
can end up getting
55:01
into all kinds of trouble.
55:02
My favorite one at the moment
that I'm really struggling with
55:05
is I'm building a
macOS application.
55:08
Anybody ever build
in SwiftUI on macOS?
55:12
OK, a couple.
55:14
SwiftUI is the default
language that Apple
55:16
use for building for
macOS as well as iPhone.
55:19
But when you look
at the training set,
55:22
the data training sets that
are used to train these models,
55:25
the vast majority of the code
is iPhone code, not macOS code.
55:28
And when I prompt
code into existence,
55:30
it's often given me iOS APIs
and those kind of things.
55:35
Even though I'm in Xcode
and I've created a macOS app
55:38
and it's a macOS template and
I'm talking to it in Xcode,
55:41
it still gives me iOS
code, stuff like that.
55:43
And then if I try to
change it using prompting,
55:46
you end up spiraling
into spaghetti code,
55:49
and you have to end up changing
a lot of this stuff manually.
55:52
And then, of course,
the other one
55:53
that I joked about it
earlier, but it's also true,
55:56
is some of the
bad technical debt
55:58
that you're going to encounter
in the workspace is authority
56:01
over merit.
56:03
That VP suddenly took
out his credit card,
56:05
subscribed to Replit, and
started building stuff
56:08
in Replit.
56:09
And guess whose job
it is to fix it?
56:11
So a lot of the
advice that I start
56:15
giving companies and
a lot of the words
56:17
that I would encourage
you to start thinking
56:19
of in being a trusted advisor
is to understand this stuff
56:23
and to manage
expectations accordingly.
56:27
OK, so framework
for responsible vibe
56:29
coding we've just spoke about.
56:32
So one of the things I want to
get into as we're coming soon
56:35
to a close is the hype cycle.
56:37
So hype is the
most amazing force.
56:41
I mean, I think it's one
of the strongest forces
56:43
in the universe, and
particularly in anything
56:45
that's hot, such as the fields
that I work in that are super
56:49
hot at the moment and full
of hype or AI and crypto--
56:51
you should see my Twitter feed--
56:53
that the amount of nonsense
that's out there is incredible.
56:57
So one of the
things that I would
56:59
say about the anatomy
of hype that you really
57:01
need to think about is
if you are consuming
57:05
news via social media, that
the currency of social media
57:09
is engagement.
57:12
Accuracy is not the
currency of social media.
57:15
So I go on to--
even LinkedIn, which
57:18
is supposed to be the more
professional of these,
57:21
is absolutely overwhelmed with
influencers posting things
57:26
that they've used,
Gemini or GPT,
57:28
to write an engaging post so
that they can get engagement
57:32
and they can get likes.
57:33
And the engine itself is
engineered, excuse the pun,
57:37
to reward those types of posts.
57:39
And we end up with
that snowball effect
57:42
of engagement being rewarded.
57:44
If you are the
kind of person who
57:46
can filter the signal
from the noise,
57:49
and then who can encourage
others around the signal and not
57:53
the noise, that puts you
in a huge advantage that
57:56
makes you very distinctive.
57:58
It's not as quickly and
easily tangible as likes
58:01
and engagements on social media.
58:03
But when you're in a one-to-one
environment like a job
58:06
interview, or if
you are in a job
58:08
and you are bringing that
signal to the table instead
58:11
of the noise, that makes
you immensely valuable.
58:15
So coming in with
that mindset, coming
58:17
in with the idea
of trying to filter
58:20
that signal from the
noise, trying to understand
58:23
what is important in
current affairs, how
58:27
you can be a trusted
advisor in those things,
58:30
and how you can really whittle
down that noise to help someone
58:34
is immensely valuable.
58:35
I want to start with one story.
58:38
I might be stealing
my own thunder.
58:39
I'll go on to in a moment.
58:41
So one story.
58:43
Last year when agents
started becoming the key word
58:46
and everybody saying,
in 2025, agent
58:49
will be the word of
the year and the trend
58:51
of the year, a company
in Europe asked
58:55
me to help them to
implement an agent.
58:57
So let me ask you a question.
58:59
If a company came
up to you and said,
59:01
please help me
implement an agent,
59:04
what's the correct first
question that you ask them?
59:10
What is an agent for you?
59:12
OK.
59:12
That's good.
59:13
What is an agent for you?
59:14
I'd actually have a more
fundamental question.
59:17
Yep.
59:17
What do you want to do?
59:18
What do you want to do?
59:19
OK.
59:19
Even more fundamental.
59:21
My question was why?
59:24
Why?
59:25
And peel that apart.
59:27
I spoke with the CEO,
and he was like, oh.
59:30
Yeah, everybody's
telling me that I'm
59:31
going to save business costs.
59:33
And I'm going to be able
to do these amazing things.
59:36
And yeah, my business
is going to get
59:38
better because I have agents.
59:39
And I'm like, well,
who told you that?
59:41
It was like, oh, yeah, I
read this thing on LinkedIn,
59:43
and I saw this thing on Twitter.
59:45
And it was like--
59:45
and we ended up having
that conversation.
59:47
And it was a
difficult conversation
59:49
because I had to
keep peeling apart.
59:51
And I started
asking the questions
59:52
that you two just mentioned
as well, until we really
59:55
got to the essence of
what he wanted to do.
59:57
And what he really
wanted to do, when
59:59
we take all domain
knowledge about AI aside,
1:00:03
was that he wanted to make his
salespeople more efficient.
1:00:06
And I was like, OK, you want
to make your salespeople more
1:00:08
efficient.
1:00:09
Nowhere in that sentence
do I hear the word AI,
1:00:11
and nowhere in that sentence
do I hear the word agent.
1:00:14
So now, as a
trusted advisor, let
1:00:17
me see what I can do to
help your salespeople become
1:00:19
more efficient.
1:00:20
And I'm not going to be an
AI Shill or an agent Shill.
1:00:23
I just want to say, what do we
do to make your salespeople more
1:00:26
efficient?
1:00:27
If anybody here has
ever worked in sales,
1:00:29
one of the things you realize
what a good salesperson has
1:00:31
to do is their homework.
1:00:34
Before you have a sales
call with somebody,
1:00:36
before you have a sales
meeting with somebody,
1:00:38
you need to check
their background.
1:00:40
You need to check the company.
1:00:41
You need to check the
needs of the company.
1:00:43
You see it sometimes
in the movie that, oh,
1:00:45
such and such plays golf.
1:00:46
So I'll take them to play golf.
1:00:48
It's not really that cliched,
but there is a lot of background
1:00:51
that needs to be done.
1:00:52
So I spoke with him, and I spoke
with their leading salespeople
1:00:56
and found out that-- and
I asked the salespeople,
1:00:58
what do you hate
most about your job?
1:01:00
And they were like,
well, I hate the fact
1:01:02
that I have to waste
all my time going
1:01:04
to visit these company
websites, going
1:01:07
to look up people on LinkedIn.
1:01:09
And every website is
structured differently.
1:01:12
So I can't just have a
path through a website
1:01:16
that I can follow.
1:01:17
I have to take on all
this cognitive load.
1:01:19
And they were spending about
80% of their time researching
1:01:24
and about 20% of
their time selling.
1:01:26
Oh, and by the way,
most salespeople
1:01:28
don't get paid very much.
1:01:29
They have to make
it up by commission,
1:01:31
so they're only spending 20% of
their time doing the thing that
1:01:33
gets them commission directly.
1:01:35
So we're like, OK, well,
here's something now
1:01:37
where we can start thinking
about making them more efficient
1:01:40
by cutting into that.
1:01:41
So we set a goal to make
salespeople 20% more efficient.
1:01:45
And then we could start
rolling out the ideas of AI.
1:01:48
And then we could start rolling
out the ideas of agentic AI.
1:01:51
And a quick question
what's the difference
1:01:53
between AI and agentic AI?
1:02:00
OK.
1:02:01
So-- yeah.
1:02:03
Like a good AI can do some
[INAUDIBLE] a couple of steps.
1:02:07
OK.
1:02:07
[INAUDIBLE]
1:02:11
Yep.
1:02:11
Excellent.
1:02:12
Yeah.
1:02:12
So agentic AI is really
about breaking it down
1:02:14
into steps, which is good
engineering to begin with.
1:02:17
But agentic AI, in
particular, I find
1:02:20
there's a set pattern of
steps that if you follow them,
1:02:23
you end up with a
whole idea of an agent.
1:02:25
The first of these steps
is to understand intent.
1:02:29
We tend to use the words AI,
Artificial Intelligence, a lot.
1:02:32
But what large language models
are really, really good at
1:02:35
is also understanding.
1:02:36
So the first step of
anything that you want to do
1:02:39
is to understand intent.
1:02:41
And you can use an LLM to
do that, to think about: this
1:02:44
is the task that I need to do.
1:02:45
This is how I'm going to do it.
1:02:46
Here's the intent.
1:02:47
I want to meet Bob Smith and
sell widgets to Bob Smith.
1:02:53
And this is what I
know about Bob Smith.
1:02:56
Help me with that intent.
1:02:58
The second part
then is planning.
1:03:01
So you declare to an agent
what tools are available to it,
1:03:04
browsing the web,
searching the web,
1:03:06
all of these kind of things.
1:03:08
And once you understand
your clear intent,
1:03:10
you're able to go to the
step of planning and using
1:03:12
those tools for planning.
And an LLM is very, very good
1:03:15
at then breaking that
down into the steps
1:03:17
that it needs to do
to execute a plan.
1:03:19
Search the web with
these keywords.
1:03:21
Browse this website
and find these links,
1:03:24
those types of things.
1:03:25
Once it's then
figured out that plan,
1:03:27
then it uses the tools
to get to a result.
1:03:30
And then once it has the result,
the fourth and final step
1:03:32
is to reflect on that result.
And looking at the results
1:03:35
and going back to the intent,
did we meet the intent?
1:03:38
Yes or no.
1:03:38
If we didn't, then
go back to that loop.
1:03:40
Any agent is really broken
down into those things.
1:03:43
And if you think about
breaking any problem down
1:03:45
into those four
steps, that's when
1:03:47
you start building an agent.
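To make those four steps concrete, here is a minimal Python sketch of the loop. Every helper name here (understand_intent, make_plan, execute, reflect) is a hypothetical placeholder for an LLM call or a tool invocation; this is the shape of the pattern, not any particular framework's API:

# Hypothetical sketch of the intent -> plan -> act -> reflect agent loop.
# Each helper stands in for an LLM call or a tool invocation.

def understand_intent(request: str) -> str:
    """Ask an LLM to restate what the user actually wants."""
    ...

def make_plan(intent: str, tools: list[str]) -> list[str]:
    """Ask an LLM to break the intent into tool-using steps."""
    ...

def execute(step: str) -> str:
    """Run one step with a tool, e.g. web search or browsing."""
    ...

def reflect(intent: str, results: list[str]) -> bool:
    """Ask an LLM: did these results actually meet the intent?"""
    ...

def run_agent(request: str, tools: list[str], max_rounds: int = 3):
    intent = understand_intent(request)             # 1. understand intent
    results = []
    for _ in range(max_rounds):
        plan = make_plan(intent, tools)             # 2. plan with declared tools
        results = [execute(step) for step in plan]  # 3. use the tools
        if reflect(intent, results):                # 4. reflect; loop if unmet
            break
    return results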
1:03:49
And that was part of
being a trusted advisor,
1:03:50
instead of coming in and
waving hands and saying,
1:03:53
agent this, agent that.
1:03:54
Look at this toolkit, save 20%.
1:03:56
It's really to break it
down into those steps.
1:03:58
So we did.
1:03:59
We broke it down
into those steps.
1:04:01
We built a pilot for the
salespeople of this company,
1:04:04
and they ended up saving between
10% and 15% of their time,
1:04:09
of their wasted time.
1:04:10
The doctrine of
unintended consequences
1:04:13
hit, though, after this.
1:04:14
And the unintended consequence
was the salespeople
1:04:17
were much happier because the
average salesperson was making
1:04:21
several percentage points
more sales in a given week,
1:04:24
they were earning more
money in a given week,
1:04:27
and their job just became a
little bit less miserable.
1:04:30
And then refinement to
that agentic process,
1:04:32
to be able to do all of
that research for them
1:04:34
and to help give them a brief
in a few minutes instead
1:04:37
of a few hours to help them
with the sales process,
1:04:39
ended up being like a
win-win-win all around.
1:04:42
But if you go in
being hype led and oh,
1:04:44
build an agent for the thing
without really peeling apart
1:04:48
the business requirements,
the why, the what,
1:04:50
the how, and all of these kinds
of things --
1:04:54
this company just would
have been lost in hype.
1:04:56
You've probably seen
reports recently.
1:04:58
I think McKinsey put one out
last week showing that about 85%
1:05:01
of AI projects at
companies fail.
1:05:05
And part of the
main reason for that
1:05:07
is that they're not well scoped.
1:05:08
People are jumping on
the hype bandwagon,
1:05:10
and they're not really
understanding their way
1:05:12
through the problem.
1:05:13
And I think, you know, the big brains
1:05:15
in this room and the network
that you folks have are a really
1:05:18
key component of
being able to succeed --
1:05:21
to understand your
way through that problem.
1:05:23
So that was a hype
example around agentic
1:05:26
that I was thankfully able
to help this company through.
1:05:29
Other recent hype examples
you've probably seen:
1:05:31
software
engineering is dead.
1:05:33
My personal favorite, Hollywood
is dead. Or AGI by year end.
1:05:38
I was in Saudi Arabia
this time last year
1:05:41
at a thing called the FYI.
1:05:42
And it was a dinner
at the FYI, and I
1:05:44
sat beside the CEO of a company
who I'm not going to name,
1:05:48
but this was a CEO of a
generative AI company.
1:05:51
And at that time he
was showing everybody
1:05:53
around the table
this thing that he'd
1:05:55
done, where it
was text to video,
1:05:57
and he could put in a text
prompt and get video out
1:06:00
of the prompt and get about
six seconds worth of video
1:06:02
out of it.
1:06:03
A year ago, that was--
1:06:04
I beg your pardon,
two years ago.
1:06:06
Two years ago,
that was hot stuff.
1:06:08
Nowadays, obviously,
it's quite passé.
1:06:10
Anybody can do it.
1:06:11
But he made a comment
at that table,
1:06:13
and there were a lot of media
executives at that table --
1:06:16
he was like, by this time next
year, from a single prompt,
1:06:19
we'll be able to do
90 minutes of video.
1:06:21
And so bye-bye, Hollywood.
1:06:24
So the whole Hollywood is dead
meme, I think, came out of that.
1:06:27
First of all, we can't do 90
minutes, even two years later
1:06:30
from a prompt.
1:06:31
And even if you did,
what kind of prompt
1:06:33
would be able to tell you
a full story of a movie?
1:06:35
So this type of hype
leads to engagement.
1:06:39
This type of hype
leads to attention.
1:06:42
But my encouragement to
you is to peel that apart.
1:06:45
Look for the signal.
1:06:47
Ask the why question.
1:06:48
Ask what question and
move on from there.
1:06:52
So becoming that
trusted advisor.
1:06:55
World's drowning in hype.
1:06:57
How do you do it?
1:06:57
Look at the trends,
evaluate them objectively.
1:07:00
Look at the genuine
opportunities
1:07:02
that are out there.
1:07:04
There are fashionable
distractions.
1:07:05
I don't know what the
next one is going to be,
1:07:07
but there are these
distractions that
1:07:08
are out there that
will get you lots
1:07:10
of engagement on social media.
1:07:11
Ignore them, and
ignore the people
1:07:13
that are leaning into them.
1:07:15
And then really lean
into your skills
1:07:18
about explaining technical
reality to leadership.
1:07:23
One skill that one
person coached me
1:07:25
in once that I thought
was really interesting,
1:07:27
because it sounded wrong,
but it ended up being right,
1:07:30
was whenever you see
something like this,
1:07:32
try to figure out how to make
it as mundane as possible.
1:07:35
When you can figure out how to
make it as mundane as possible,
1:07:38
then you really begin
to build the grounding
1:07:41
for being able to explain
it in detail in ways
1:07:43
that people need to understand.
1:07:46
If you go and you
look at, I think
1:07:49
Gemini 3 was released
today, but there were leaks
1:07:53
earlier this week.
1:07:54
And one person leaked that
they'd built a Minecraft clone
1:07:57
in a prompt, that kind of stuff.
1:08:00
This is the opposite of mundane.
1:08:02
This was massively hyping
the thing, massively showing off.
1:08:05
And of course, they didn't.
1:08:06
They built a flashy demo.
1:08:07
They didn't really
build a Minecraft clone.
1:08:09
But the idea here is if you
can peel that apart to OK,
1:08:12
how do I think about what
are the mundane things that
1:08:15
are happening here?
1:08:17
The one that I've been working
with a lot recently is video.
1:08:20
So text-to-video prompts,
as I've mentioned --
1:08:23
instead of the magical "you can
do whatever you want," all nice
1:08:27
and fluffy, "Hollywood
is dead" -- what
1:08:29
is the mundane element
of doing text to video?
The mundane element
of doing text to video
of doing text to video
1:08:33
is that when you train a model
to create video from a text
1:08:37
prompt, what it is
doing is it's creating
1:08:39
a number of successive frames.
1:08:41
And each of those
successive frames
1:08:43
is going to be slightly
different from the frame before.
1:08:46
And you've trained a model by
looking at video to say, well,
1:08:50
if in frame 1, the person's
hand is like this and in frame 2
1:08:52
it's like that,
then you can predict
1:08:54
it moves this way if
there's a matching prompt.
1:08:56
And suddenly it's become
a little bit more mundane,
1:08:58
but suddenly they
begin to understand it.
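In code terms, that mundane framing is just conditional next-frame prediction. A toy sketch, where model is a hypothetical stand-in and real systems (diffusion models and the like) are far more sophisticated:

# Toy sketch of text-to-video as successive frame prediction.
# `model` is a hypothetical stand-in; the mundane core is that each
# new frame is predicted from the prior frames plus the text prompt.
def generate_video(model, prompt, first_frame, num_frames=192):
    frames = [first_frame]
    for _ in range(num_frames - 1):
        # Predict a frame slightly different from the ones before it,
        # in a way that stays consistent with the prompt.
        frames.append(model.predict(frames, prompt))
    return frames  # 192 frames is about 8 seconds at 24 fps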
1:09:00
And then the people
who are experts
1:09:02
in that specific field, not
the technical side of it,
1:09:05
are now the ones
that will actually
1:09:06
be able to come up and do
brilliant things with it.
1:09:11
So that hype
navigation strategy--
1:09:13
filter actively, go deep
on the fundamentals,
1:09:16
get your slides to work.
1:09:17
And then, of course, keep
your finger on the pulse.
1:09:19
The hardest part
of that, I think,
1:09:21
is the third one
is really keeping
1:09:22
your finger on the pulse.
1:09:23
And that's when you have to wade
into those cesspits of people
1:09:26
just farming
engagement and really
1:09:28
try to figure out the
signal from the noise there.
1:09:30
But I think it's really
important for you
1:09:32
to be able to do that, to be
connected, to understand that.
1:09:34
Reading papers is all very good.
1:09:36
The signal-to-noise ratio,
I think, in reading papers
1:09:38
is a lot better.
1:09:39
But to understand the
landscape of the people
1:09:41
that you are advising --
they are the ones
1:09:44
who are wading in the cesspools
of Twitter and X and LinkedIn.
1:09:47
And there's nothing wrong
with those platforms
1:09:49
in and of themselves,
but it's the stuff that's
1:09:51
posted on those platforms.
1:09:54
So overall landscape, it
is ripe with opportunity,
1:10:00
absolutely ripe
with opportunity.
1:10:02
So I would encourage
you, as Andrew
1:10:04
did, to continue learning,
to continue digging
1:10:07
into what you can do and
to continue building.
1:10:09
But there are risks ahead.
1:10:12
Anybody remember
the movie Titanic?
1:10:16
Remember the famous phrase in
that, "iceberg right ahead"?
1:10:19
But immediately before that,
there's a scene in Titanic--
1:10:23
if we weren't being
filmed, I would show it,
1:10:25
but I can't for
copyright reasons-- where
1:10:27
the two guys up in the crow's
nest are freezing and talking.
1:10:31
And the crow's nest
at the top of the ship
1:10:33
is where the spotters would be
to spot any icebergs in front.
1:10:36
And go back and watch
the movie again.
1:10:38
You'll see the conversation
between these two guys
1:10:40
is that all they're talking
about is how cold they are.
1:10:43
And then it cuts
away to the crew
1:10:45
of the ship who are
like, wait, aren't they
1:10:47
supposed to have binoculars?
1:10:48
And then the crew is like,
oh, we left the binoculars
1:10:51
behind in port.
1:10:52
The framing of the
whole idea was like,
1:10:55
they were so arrogant in
being able to move forward
1:10:58
that they didn't want to look
out for any particular risks.
1:11:00
And even though they
had people whose job it
1:11:02
was to look out for risks,
they didn't properly
1:11:04
equip or train them.
1:11:05
And that, to me, is a
really good metaphor
1:11:07
for where the AI
industry is today.
1:11:10
There are risks in front of us.
1:11:12
Those risks -- the B
word, the bubble word
1:11:14
you're probably reading in
the news -- are there.
1:11:17
To me, though, the opportunity
and the things to think about
1:11:24
in terms of a bubble are most
of you probably don't remember
1:11:28
dotcom bubble of the 2000s.
1:11:31
But if you think about
the dotcom bubble,
1:11:33
that was the biggest
bubble in history.
1:11:36
It burst, but we're still here.
1:11:40
And the people who did dotcom right
not only survived, they thrived.
1:11:46
Amazon, Google,
they did it right.
1:11:49
They understood the
fundamentals of what
1:11:51
it was to build a dotcom.
1:11:52
They understood the
fundamentals of what it was
1:11:54
to build a business on dotcom.
1:11:56
And when the bubble of hype
burst, they didn't go with it.
1:11:59
There was one website, I
believe it was pets.com,
1:12:01
that had the mindset of if
you build it, they will come.
1:12:06
They had Super Bowl
commercials around pets.com.
1:12:10
They couldn't handle the
traffic that they got.
1:12:12
And that was the kind of site
that when the bubble burst,
1:12:15
those were the sites
that just evaporated.
1:12:17
So that bubble in
AI is likely coming.
1:12:20
There is always a bubble.
1:12:22
So the companies that
are doing AI right
1:12:25
are the ones, like I
said, that won't just
1:12:27
avoid the bubble -- they will
actually thrive post-bubble.
1:12:33
And the people who are
doing AI right, the folks
1:12:37
in this room who are
thinking about AI
1:12:39
and how you bring
it to your company,
1:12:40
and the advice that you're
giving to your company
1:12:42
and leaning into
that in the right way
1:12:45
will also be the ones who not
only avoid getting laid off
1:12:48
in the bubble crashes, but will
be the ones who will thrive
1:12:52
through and after the bubble.
1:12:54
So anatomy of any bubble,
and what I'm seeing in the AI
1:12:57
one in particular, is
this kind of pyramid.
1:13:00
At the top is the hype that
I've been talking about.
1:13:02
At the bottom is
massive VC investment.
1:13:05
I'll be frank.
1:13:06
I'm already seeing
that drying up.
1:13:08
Once upon a time,
you could go out
1:13:10
with anything that
had AI written on it
1:13:12
and get VC investment.
1:13:13
Then you could go out and
do anything with an LLM
1:13:16
and get VC investment.
1:13:17
Now they are far,
far, far more cautious.
1:13:20
I've been advising
a lot of startups.
1:13:22
The amount that they're getting
invested is being scaled back.
1:13:27
The stuff that's being
invested in is changing.
1:13:30
And the second layer down,
massive VC investment
1:13:35
is already beginning to vanish.
1:13:37
Unrealistic valuations.
1:13:39
Companies that aren't
making money being valued
1:13:42
massively high.
1:13:43
We all know who they are.
1:13:44
We're beginning to see those
unrealistic valuations being
1:13:47
fed off of that hype.
1:13:49
Me-too products, where
somebody does something,
1:13:51
and it's successful,
and everybody
1:13:53
jumps on the bandwagon.
1:13:54
We're also seeing
them everywhere.
1:13:56
We saw them throughout
the dotcom bubble.
1:13:59
And then right at the
bottom is that real value.
1:14:01
I probably shouldn't have
done the triangle like this.
1:14:04
It should be more an
upside down triangle.
1:14:06
Because the real
value here is small.
1:14:08
But I've vibe coded these
slides into existence.
1:14:11
So this is some of the
technical debt I took on.
1:14:14
But the real value there,
that kernel of value is there,
1:14:18
and the ones that build for that
will be the ones that survive.
1:14:22
So the direction that I see
the AI industry going in
1:14:28
and the direction that I would
encourage you to start thinking
1:14:31
about your skills in, is really
over the next five years,
1:14:33
there's going to
be a bifurcation.
1:14:36
I'm just going to
be ornery in how
1:14:38
I describe it as big and small.
1:14:40
Big AI will be what we see
today, with the large language
1:14:43
models getting bigger in the
desire to drive towards AGI.
1:14:48
The Geminis, the Claudes,
the OpenAIs of the world
1:14:52
are going to continue to
drive bigger, and bigger
1:14:54
is better in the mindset
of those companies
1:14:57
towards achieving AGI or towards
achieving better business value.
1:15:01
That's going to be one
side of the branch.
1:15:03
The other side of the branch
is I'm going to call it small.
1:15:05
We've all seen
open-source models.
1:15:08
I hate the term open source.
1:15:10
Let me call them open
weights, or let me call them
1:15:12
self-hostable models.
They're
1:15:15
exploding onto the landscape.
1:15:17
I read an article recently
about Y Combinator
1:15:20
that 80% of the
companies in Y Combinator
1:15:23
were using small models
from China in particular.
1:15:26
So the Chinese
models in particular
1:15:29
are doing really well,
probably because of
1:15:31
the overall landscape.
1:15:32
They're not leaning into the
large models the same way
1:15:34
as the West is.
1:15:36
I see that
bifurcation happening.
1:15:37
China, I think,
has that head start
1:15:39
on the small models
that may last.
1:15:41
It may not.
1:15:41
I don't know.
1:15:43
But the point is, we're heading
in that particular direction
1:15:45
of I'm going to call them
instead of big and small now,
1:15:48
models that are hosted on
your behalf by somebody else,
1:15:52
like a GPT or a
Gemini or a Claude,
1:15:54
or models that you can host
yourself for your own needs.
1:15:59
This side, as it is right now --
1:16:03
its bubble may burst sooner.
1:16:05
This one right now
is underserved.
1:16:06
And its bubble
will come later on.
1:16:09
And the major skills that I
can see developers needing
1:16:12
over the next two to three
years on this side of the fence
1:16:16
will be fine tuning.
1:16:18
So the ability to take
an open-source model
1:16:21
and fine-tune it for
particular downstream tasks.
1:16:25
Let me give one concrete
example that I've personally
1:16:27
experienced.
1:16:28
I work a lot in Hollywood,
and I've worked a lot
1:16:30
with studios making movies.
1:16:33
And one studio in particular
I was lucky enough
1:16:36
to sell a movie to, it's
still in preproduction.
1:16:38
It'll probably be in
preproduction forever.
1:16:41
But one of the things I
learned as part of that process
1:16:44
was that IP in studios
is so protected.
1:16:49
It's not even funny.
1:16:50
Go and Google
James Cameron, who
1:16:52
created Avatar and
the lawsuits that he's
1:16:55
involved in of this person
who apparently sent him
1:16:58
a story many years
ago about blue aliens
1:17:00
and is now suing him
for billions of dollars
1:17:02
because obviously there
were blue aliens in Avatar.
1:17:06
That level of IP protection
in Hollywood is insane.
1:17:09
The opportunity with
large language models
1:17:12
is equally insane.
1:17:15
A lot of the focus is on large
language models for creation,
1:17:17
for storytelling, for
rendering and all that,
1:17:20
but actually the major
opportunity that they have is
1:17:22
for analysis -- to take
a look at synopses of movies
1:17:27
and find out what
works and what doesn't.
1:17:29
Why was this movie a
hit and this one wasn't?
1:17:32
What time of year was this
one released and it became
1:17:34
successful and this one wasn't?
1:17:36
And with a margin on
movies being razor thin,
1:17:39
that kind of analysis is huge.
1:17:40
But in order to do
that kind of analysis,
1:17:42
you need to share the
details of your movie
1:17:44
with a large language model.
1:17:45
And they will absolutely not
do that with GPT or Gemini
1:17:48
or whatever, because
they're now sharing
1:17:50
their IP with a third party.
1:17:52
Enter small models,
where they can self-host
1:17:55
their own small model and they
are getting smarter and smarter.
1:17:58
The 7B model of today is
as smart as the 50B model
1:18:02
of yesterday.
1:18:03
A year from now, the 7B model
will be as smart
1:18:06
as the 300B model of yesteryear.
1:18:09
So they're moving in that
direction of building
1:18:13
using small self-hosted
models, which they can then
1:18:16
fine-tune on downstream tasks.
1:18:18
Similar with other
things where privacy
1:18:19
is important: law offices,
medical offices, all
1:18:21
of those kinds of things.
1:18:22
So those type of skills
are fundamentally
1:18:25
important going forward.
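As a rough illustration of what that fine-tuning skill looks like, here is a minimal sketch using the Hugging Face transformers and peft libraries with LoRA. The model name and target modules are illustrative placeholders (which modules to adapt depends on the architecture), not a recommendation from the talk:

# Minimal sketch: LoRA fine-tuning of a small open-weights model.
# Assumes the Hugging Face transformers/peft stack; the model name
# and target_modules below are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "some-org/some-7b-model"  # placeholder for a self-hostable model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapters instead of all 7B weights, which
# is what makes downstream fine-tuning feasible on modest hardware.
config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # depends on the architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the model

# From here, a standard training loop (or transformers' Trainer) runs
# over your private data -- synopses, case files, patient notes --
# without that data ever leaving your own machines.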
1:18:27
So that's the bifurcation that
I'm seeing happening in AI.
1:18:30
The sooner bubble, I think, is
in the bigger, non-self-hosted side.
1:18:34
The later bubble is in
the smaller, self-hosted side.
1:18:36
But either way, for
you, for your career,
1:18:39
to avoid the impact of
any bubble bursting,
1:18:42
focus on the fundamentals.
1:18:44
Build those real solutions.
1:18:46
Understand the business
side, and most of all,
1:18:48
diversify your skills.
1:18:49
Don't be that one trick pony who
only knows how to do one thing.
1:18:53
I've worked with
brilliant people who
1:18:55
are fantastic at coding
a particular API
1:18:58
or a particular framework.
1:18:59
And then the industry moved
on and they got left behind.
1:19:03
OK, so yeah, when bubbles burst,
that overall fallout -- I've kind of
1:19:07
spoken about it a
little bit already.
1:19:09
Funding evaporates, hiring
freezes become layoffs,
1:19:12
projects get canceled, and
talent floods the market.
1:19:14
Yeah.
1:19:14
Quick question from
the last slide.
1:19:17
[INAUDIBLE] I heard a lot
about how NVIDIA is hiring,
1:19:23
and they're very
specific about they
1:19:26
want people for very specific
problem that they have.
1:19:30
So they can require people to be
basically put out that one thing
1:19:34
that you're missing.
1:19:35
So how do you think-- how is
it more important to diversify
1:19:43
skills versus actually
focusing on, for example,
1:19:46
LLMs versus computer
vision or versus
1:19:49
very specific downstream task?
1:19:52
So I mean, I think the
question was around NVIDIA
1:19:55
in particular hiring
for a very specific, very
1:19:57
narrow scenario.
1:19:58
So then the question
is, how important
1:20:00
is it for you to
become an expert
1:20:01
in a narrow scenario versus
diversifying your skills?
1:20:04
I would always argue it's still
better to diversify your skills,
1:20:08
because that one narrow
scenario is only that one
1:20:11
narrow scenario, and you're
putting all your eggs
1:20:13
into one basket.
1:20:13
NVIDIA would be a fantastic
company to work for.
1:20:16
Nothing against them in any way.
1:20:17
But if you put all of your
eggs into that basket and you
1:20:20
don't get it, then what?
1:20:22
So I think the idea
of really being
1:20:24
able to-- if you are
passionate about a thing,
1:20:28
to be very deep in that
thing is very, very good.
1:20:31
But to only be able
to do that thing,
1:20:33
I think I would always
encourage to be diversified.
1:20:36
And when I say diversified,
you're saying LLMs or computer
1:20:39
vision or anything
like that, I think
1:20:41
I mean that's one part of it.
1:20:42
But it's like that knowledge of
models and how to use them, to me,
1:20:46
is a single skill.
1:20:47
The diversification of skills
is breaking outside of that.
1:20:51
Also to be able to think, OK,
what about building applications
1:20:54
on top of these?
1:20:55
What does scaling an
application look like?
1:20:57
What does software engineering
in this case look like?
1:20:59
What about user experience
and user experience skills?
1:21:02
Because it's all very well to
build a beautiful application.
1:21:05
But if nobody can use it--
1:21:06
I'm looking at here
at Microsoft Office.
1:21:10
There's stuff like that
that's what I really
1:21:13
mean about diversifying beyond.
1:21:14
So even in that narrow
example with NVIDIA,
1:21:17
to be able to break out of
that one particular example,
1:21:20
but to show skills in other
areas that are of value,
1:21:22
I think is really important.
1:21:26
OK.
1:21:27
As we're just running
a little bit-- so yeah,
1:21:29
I just wanted to--
1:21:30
I've gone into it a
little bit already,
1:21:32
but I'm a massive
advocate for small AI.
1:21:35
I really do believe small
AI is the next big thing,
1:21:38
because we're
moving into a world,
1:21:39
and this is part of the
job that I do at Arm,
1:21:42
is we're kind of moving into
a world of AI everywhere
1:21:44
all at once.
1:21:46
So there's a
traditional, and it's
1:21:47
interesting you just
brought up NVIDIA
1:21:49
because there's a
traditional conception
1:21:51
that compute platforms are CPU
plus GPU when it comes to AI.
1:21:55
But that's also changing--
1:21:57
CPU general purpose,
GPU specialists.
1:22:00
But for example,
in mobile space,
1:22:02
there's massive innovation
being done with the technology
1:22:05
called SME, Scalable
Matrix Extensions.
1:22:09
And what SME is
all about is really
1:22:11
allowing you to
bring AI workloads
1:22:13
and put them on the CPU.
1:22:15
The frontrunners in this are a
couple of Chinese phone vendors,
1:22:19
Vivo and Oppo, who've just
recently released phones
1:22:22
with SME-enabled chips.
1:22:24
And what's magical
about these is that, A,
1:22:26
they don't need to have a
separate external chip drawing
1:22:30
extra power, taking up
extra footprint space just
1:22:33
to be able to run AI workloads.
1:22:35
And B, the CPU, of course,
being a low-power-draw thing,
1:22:38
being able to run AI
workloads on that,
1:22:40
they've been able to build
interesting new scenarios.
1:22:43
And if I talk about
one in particular,
1:22:45
there's a company called Alipay.
1:22:47
And Alipay had an
application where you would--
1:22:50
and we've all seen
these apps where
1:22:52
you can go through
your photographs,
1:22:53
and you can search for
a particular thing.
1:22:56
Places I ate sushi or something
along those lines and use
1:22:59
that to create a slideshow.
1:23:00
All of those require
a back end service.
1:23:03
So your photographs are hosted
on Google Photos or Apple
1:23:06
Photos or something like that.
1:23:08
And that back end
service runs the model
1:23:10
that you can search
against it and be
1:23:12
able to do the assembly of them.
1:23:14
What Alipay wanted to
do was like, say, there
1:23:16
are three problems with this.
1:23:17
Problem number one, privacy.
1:23:19
You have to share your
photos with a third party.
1:23:21
Problem number two, latency.
1:23:23
You got to upload those photos.
1:23:25
You got to send the thing.
1:23:26
You got to have the
back end do the thing,
1:23:28
and then you've got to download
the results from the thing.
1:23:30
And then number three is
building that cloud service
1:23:33
and standing that up
costs time and money.
1:23:36
So if they could move all of
this onto the device itself,
1:23:39
now the idea was they
could run a model
1:23:41
on the device that searches
the photos on the device.
1:23:44
You don't have the latency.
1:23:45
And from a business
perspective, they're
1:23:47
now saving the money on
creating and standing up this service.
1:23:51
They now have AI running on CPU
in order to be able to do that.
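As a sketch of what that on-device photo search can look like, here is a minimal example using a CLIP-style embedding model through the sentence-transformers library. Everything runs locally once the model weights are downloaded; Alipay's actual implementation is not public, so this only illustrates the pattern:

# Minimal sketch of on-device semantic photo search with a CLIP-style
# model. No photos leave the device; Alipay's real stack will differ.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # small enough to self-host

# Embed every photo once; this index can be cached on the device.
photos = sorted(Path("photos").glob("*.jpg"))
photo_embs = model.encode([Image.open(p) for p in photos])

# Embed the query text into the same space and rank photos by similarity.
query_emb = model.encode("places I ate sushi")
for hit in util.semantic_search(query_emb, photo_embs, top_k=5)[0]:
    print(photos[hit["corpus_id"]], round(hit["score"], 3))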
1:23:54
Apple are also people
who've invested heavily
1:23:56
in this scalable
matrix extensions.
1:23:59
You see whenever
they talk about--
1:24:00
if you've ever watched a WWDC
or anything like that, when they
1:24:03
talk about the new A-series
chips and M-series chips,
1:24:06
about the neural cores and those
kind of things in them, that's
1:24:09
part of the idea.
1:24:10
So to think about breaking that
habit that we've gotten into,
1:24:15
where you need a GPU to be able
to do AI is part of the trend
1:24:18
that the world is heading in.
1:24:20
Apple are probably one
of the leaders in that.
1:24:22
I'm very, very bullish on
Apple and Apple Intelligence
1:24:24
as a result. And from the AI
perspective, seeing that trend
1:24:31
and following that vector to
its logical conclusion -- as models
1:24:36
are getting smaller, embedded
intelligence everywhere
1:24:39
isn't a pipe dream.
1:24:40
It isn't sci-fi anymore.
1:24:41
It's going to be a
reality that we'll
1:24:43
be seeing very, very shortly.
1:24:44
So that idea of that
convergence of AI,
1:24:47
because of the ability of
smaller models getting smarter
1:24:50
and lower power devices
being able to run them,
1:24:53
we see that convergence hitting,
and I see massive opportunity
1:24:56
there.
1:24:58
So one last part and just going
back to agents for a moment,
1:25:01
I think the one
thing that I always
1:25:03
say is like a hidden part
of artificial intelligence
1:25:06
is really what I like to call
artificial understanding.
1:25:09
And when you can start using
models to understand things
1:25:12
on your behalf.
1:25:14
And when they understand
them on your behalf,
1:25:16
to be able to craft from that
understanding new things,
1:25:20
you can actually
develop superpowers
1:25:22
where you're far more
effective than ever before,
1:25:24
be that creating code or
creating other things.
1:25:26
I'm going to give one quick
demo just so we can wrap up.
1:25:30
And I was talking earlier
about generating video.
1:25:35
So this picture is-- oops.
1:25:42
Sorry.
1:25:42
The connection here is
not very good, I lost it.
1:25:45
So here we go.
1:25:47
This picture here is actually
of my son playing ice hockey.
1:25:50
And I took this picture,
and I was saying,
1:25:53
OK, I think I'm very
good at prompting.
1:25:56
And I wrote a nice prompt
for this picture of him.
1:26:00
He's in the middle
of taking a slapshot.
1:26:02
He's got some beautiful
flex on his stick.
1:26:04
And I asked it like, OK, to
prompt him scoring a goal.
1:26:08
What do you think happened?
1:26:10
Should we watch?
1:26:12
Let's see if it works.
1:26:13
[VIDEO PLAYBACK]
1:26:18
[CROWD CHEERING]
1:26:20
[END PLAYBACK]
1:26:21
This was the wrong video, but
it still shows the same idea.
1:26:25
Because of poor prompting or
because of poor understanding
1:26:29
of my intent, if I talk
about it in agentic senses,
1:26:34
the arena that he was in,
which is a practice arena
1:26:36
and doesn't have any
people in it-- sorry.
1:26:38
Let me pause it.
1:26:41
If I just rewind to
here, if we look up
1:26:46
in this top right-hand
corner here,
1:26:48
this is basically where they
store all their garbage.
1:26:51
But the AI didn't know
that, had no idea of it.
1:26:53
So it assumed it
was a full arena,
1:26:55
and it started
painting people in.
1:26:57
And even though he shot a
mile wide, everybody cheers.
1:27:00
And somehow he has two sticks
in his hand instead of one,
1:27:03
and they forgot his name.
1:27:05
So I did not go through an
agentic workflow to do this.
1:27:09
I did not go through the steps
of, A, understand my intent.
1:27:13
B, once you
understand my intent,
1:27:15
understand the tools that
are available to you.
1:27:17
In this case, it's
Veo, and understand
1:27:19
the intricacies of using Veo.
1:27:21
Make a plan of how to use them.
1:27:23
Make a plan of how to
build a prompt for them,
1:27:25
and then use them
and then reflect.
1:27:27
So I've been advising a
startup that is working
1:27:32
on movie creation using AI.
1:27:34
And I want to show you a
little sample here of a movie
1:27:36
that we've been working on with
them, where the whole idea is
1:27:39
like, if you want to have
performances out of virtual actors
1:27:42
and actresses, you
need to have emotion.
1:27:45
You need to be able to
convey that emotion,
1:27:47
and you also need to be able to
put that emotion in the context
1:27:50
of the entire story.
1:27:52
Because when you create
a video from a prompt,
1:27:54
you're creating an
eight-second snippet.
1:27:56
That eight-second
snippet needs to know
1:27:58
what's going on in
the rest of the story.
1:28:00
So if I show this
one for a moment.
1:28:03
And it's a little
wooden at the moment,
1:28:06
it's not really
working perfectly.
1:28:08
I have professional
actors who are friends
1:28:10
who are advising me
on this, and they
1:28:12
laughed at the performances.
1:28:13
But try to view it
through the difference
1:28:16
that we had from an agentic
prompt with the hockey
1:28:19
player to this one.
1:28:20
[VIDEO PLAYBACK]
1:28:22
That's hopefully we can hear it.
1:28:33
- I guess I can do the
pub quiz after all.
1:28:40
They just shut me down.
1:28:42
I'm so close.
1:28:45
But they wouldn't listen.
1:28:48
- I won't--
1:28:49
[END PLAYBACK]
1:28:49
They never listen.
1:28:51
So here's the idea
of, again, just
1:28:54
thinking in terms of agentic,
as I was saying earlier on,
1:28:57
breaking it into those steps.
1:28:58
That allowed me to use
exactly the same engine,
1:29:01
as I was showing
you earlier on, that
1:29:02
fails to be able
to show something
1:29:04
that works and is able to do
things like portraying emotion
1:29:07
that I just spoke about.
1:29:09
So I know we're a
little bit over time.
1:29:11
So sorry about that.
1:29:12
I can take any questions
if anybody has any.
1:29:14
I see Andrew is here as well.
1:29:15
He's at the back.
1:29:16
And I just really
want to say thank you
1:29:18
so much for your attention.
1:29:19
I really appreciate it.
1:29:21
[APPLAUSE]
1:29:28
Yep.
1:29:29
How much of this new
generation [INAUDIBLE]
1:29:34
relation with the agentic
[INAUDIBLE] use case
1:29:38
is improved with the
agentic workflow?
1:29:40
And how much of it is
a training set bias
1:29:43
where you might
have only pictures
1:29:48
or videos with [INAUDIBLE]
that are full of [INAUDIBLE]
1:29:53
Yeah, it's a great question.
1:29:55
Just to repeat for the video,
how much of the improvement
1:29:58
is from the use of
an agentic workflow
1:30:00
versus just lack of hockey
stuff in the training set
1:30:03
for the failed one?
1:30:06
Not comparing like to like,
so just using my gut.
1:30:09
When I looked at how I broke
this down into the workflow:
1:30:12
OK, I created
scenes like this one,
1:30:14
and they were awful when I
just did it directly for myself,
1:30:18
with no basis, no agentic workflow,
no artificial understanding.
1:30:22
And when I broke it down into
the steps where it's like, OK,
1:30:25
in this scene, the girl
is sitting on the bench,
1:30:28
and she's upset.
1:30:30
And the person is talking to
her and he wants to comfort her.
1:30:34
Feeding that to a
large language model
1:30:38
along with the entire
story and along
1:30:40
with the constraints that
I had, where the shot
1:30:43
has to be eight seconds
long, clear dialogue
1:30:45
and all of those kind
of things, and then
1:30:47
to understand my
intent from that one,
1:30:50
the LLM ended up
expressing a prompt that
1:30:53
was far more loquacious
than I ever would have,
1:30:57
that was far more descriptive
than I ever would have.
1:30:59
The LLM had
understanding of what
1:31:01
makes a good shot, what
makes a good angle, what
1:31:03
makes good emotion far
more than I would have.
1:31:06
I could spend hours
trying to describe it.
1:31:08
So that first step
in the agentic flow
1:31:10
of it doing that for me
and understanding my intent
1:31:13
was huge.
1:31:14
The second step then is the
tools that it's going to use.
1:31:17
So I explicitly said which video
engine I'm going to be using.
1:31:20
I was using Gemini as the
LLM, and hopefully Gemini
1:31:22
is familiar with Veo,
that kind of stuff,
1:31:25
so to understand the
idiosyncrasies of doing things
1:31:27
with Veo.
1:31:28
What I learned, for
example, Veo was
1:31:30
very bad at doing
high-action scenes,
1:31:33
but is very good at doing slow
camera pulls to do emotion,
1:31:36
as you saw in this case.
1:31:38
So the LLM knew that
from me, declaring
1:31:40
I was using that as a tool.
1:31:41
And then further
it built a prompt
1:31:43
and then further refined
the prompt from that.
1:31:45
And then the third part
actually using the tool
1:31:47
to actually generate it
for me, generating a video
1:31:50
with something like Veo costs,
I think, between $2 and $3
1:31:53
to generate four
videos in credits.
1:31:55
So the last thing
I want to do is
1:31:57
generate lots and lots and
lots and lots of videos
1:31:59
and throw good money after bad.
1:32:01
But all of that token
spend that I did earlier on
1:32:04
to understand my intent and
then to make the plan for using
1:32:07
the agent was saved in the
back end where it got it right.
1:32:10
Maybe not get it
right first time,
1:32:13
but it would very rarely take
more than two or three tries
1:32:15
to get something that
was really, really nice.
1:32:17
So I think without
comparing like with like, I
1:32:21
do think that plan of action and
going through a workflow, that
1:32:24
worked very, very well.
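Pulling that answer together, the workflow he describes might look roughly like this. Every name here is a hypothetical placeholder -- llm() stands in for an LLM API such as Gemini, generate_video() for a video model such as Veo; neither is a real call signature. The point is where the cheap token spend sits relative to the expensive video generation:

# Hypothetical sketch of the agentic video workflow described above.
CONSTRAINTS = "8-second shot, clear dialogue, slow camera pull for emotion"

def make_shot(scene: str, full_story: str, max_tries: int = 3):
    # 1. Understand intent: cheap LLM tokens, spent up front.
    intent = llm(f"Restate the intent of this scene: {scene}\n"
                 f"Story context: {full_story}")

    for _ in range(max_tries):
        # 2. Plan: have the LLM write a far more descriptive prompt than
        #    a human would, tuned to the declared tool's idiosyncrasies.
        prompt = llm(f"Write a Veo prompt for: {intent}\n"
                     f"Constraints: {CONSTRAINTS}")

        # 3. Act: the expensive step (a few dollars in credits per batch),
        #    so we only reach it with a well-prepared prompt.
        clip = generate_video(prompt)

        # 4. Reflect: check the result against the intent before paying
        #    for another generation.
        verdict = llm(f"Does this clip meet the intent '{intent}'? "
                      f"Answer yes or no. Clip: {clip.description}")
        if verdict.strip().lower().startswith("yes"):
            return clip
    return clip  # usually right within two or three tries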
1:32:27
Any other questions,
thoughts, comments?
1:32:32
Yeah, up at the back.
1:32:34
What has surprised you
the most about the AI
1:32:37
industry over the years?
1:32:39
What has surprised me
the most about the AI
1:32:41
industry over the years?
1:32:43
Oh, that's a good one.
1:32:45
I think what has
surprised me the most,
1:32:48
and it probably shouldn't
have surprised me,
1:32:50
is how much hype took over.
1:32:53
I actually-- I honestly
thought a lot of people
1:32:56
who are in important
decision making roles
1:32:58
and that kind of thing would be
able to see the signal better
1:33:01
than they did.
1:33:03
And I think the other part
was that the desire to make
1:33:09
immediate profits as
opposed to long-term gains
1:33:13
also surprised me a lot.
1:33:14
Let me share one story in that
space. One of the things was
1:33:18
that after Andrew and I
taught the TensorFlow
1:33:22
specializations on Coursera,
Google
1:33:25
launched a professional
certificate,
1:33:28
where the idea of this
professional certificate
1:33:30
was that it would give a rigorous exam.
1:33:32
And at the end of
the rigorous exam,
1:33:33
if you got the certificate,
it was a high prestige thing
1:33:38
that would help you find
work, and particularly
1:33:40
at the time when TensorFlow was
a very highly demanded skill
1:33:43
in order to get work.
1:33:45
Running that program cost
Google $100,000 a year.
1:33:49
Drop in the bucket,
not a lot of money.
1:33:52
The goodwill that came
out of it was immense.
1:33:56
I can tell you--
1:33:57
I'll tell one story very
quickly. There was a young man,
1:34:01
and he went public
in some advertising
1:34:03
stuff with Google,
and he lived in Syria.
1:34:08
And we all know there was
a huge civil war in Syria
1:34:10
over the last few years.
1:34:12
And he got the
TensorFlow certificate.
1:34:14
He was one of the first
in Syria to get it,
1:34:16
and it lifted him
out of poverty,
1:34:18
where he was able
to move to Germany
1:34:21
and get work at a
major German firm.
1:34:23
And I met him at an
event in Amsterdam
1:34:25
where he told me his story.
1:34:27
And now, because of the job
that he had in this German firm,
1:34:31
he's able to support
his family back home
1:34:34
and move them out
of the war torn zone
1:34:36
into a peaceful zone all
because he got this AI thing.
1:34:41
And there were countless
stories like that.
1:34:44
Very inspirational,
very beautiful stories.
1:34:47
But the thing that
surprised me then
1:34:48
was sometimes the
lack of investment
1:34:50
in that, where there was
no revenue being generated
1:34:53
for the company out of that.
1:34:54
We deliberately kept
it revenue neutral so
1:34:57
that the price of the
exams could go down.
1:34:59
We wanted it to self-sustain.
1:35:01
It ended up not being
revenue neutral.
1:35:03
It ended up costing the company
about $100,000 to $150,000
1:35:06
a year.
1:35:06
So they canned it.
1:35:08
And it's a shame because of
all the potential goodwill
1:35:10
that can come out of
something like that.
1:35:12
But I think those
were the two that
1:35:13
immediately jump to mind that
have surprised me the most.
1:35:16
And then I guess one other
part that I would say
1:35:19
is the people who've been able
to be very successful with AI,
1:35:24
who you wouldn't think
would be the ones that
1:35:26
would be successful with AI, have
always been inspirational to me.
1:35:29
So allow me one more story.
1:35:32
I have a good friend.
1:35:32
I showed ice hockey
a moment ago.
1:35:34
I have a good friend who is a
former professional ice hockey
1:35:37
player.
1:35:38
Any ice hockey fans here?
1:35:40
It's a brutal sport.
1:35:43
You see a lot of fighting and
a lot of stuff on the ice.
1:35:46
And he dropped out of school
when he was 13 years old
1:35:48
to focus on skating.
1:35:50
And he will always
tell everybody
1:35:52
that he's the dumbest person
alive because he's uneducated.
1:35:55
He and I are complete opposites.
1:35:56
That's why we get on so well.
1:35:59
And he retired from ice hockey
because of concussion issues.
1:36:03
And he now runs a nonprofit--
1:36:05
ice rinks for a nonprofit.
1:36:08
And about three years ago,
we were having a beer,
1:36:11
and he was like, so
tell me about AI.
1:36:13
And tell me about
this ChatGPT thing.
1:36:15
Is it any good?
1:36:16
And I was like, just
sharing the whole thing.
1:36:18
Yes, it's good and all
that kind of stuff.
1:36:19
And it was obviously a loaded
question, and I didn't know why.
1:36:22
But part of his job
at his nonprofit
1:36:25
is that every quarter,
he has to present
1:36:27
to the board of directors
the results of the operations
1:36:30
so that they can
be funded properly,
1:36:31
because even though
they're nonprofit,
1:36:33
they still need
money to operate.
1:36:35
And he was spending upwards
of $150,000 a year to bring
1:36:40
in consultants to pull the
data from all of the different
1:36:44
sources.
1:36:45
They're pulling data
from-- there's machines
1:36:47
in what's called the pump room
that has a compressor that
1:36:49
cools the ice.
1:36:50
And there were spreadsheets
and there was accounts
1:36:52
and all this kind of stuff.
1:36:53
And he was not tech
savvy in any way.
1:36:56
But he needed to
process all this data.
1:36:59
So he did an experiment where
he got ChatGPT to do it.
1:37:02
And this was the
loaded question,
1:37:03
asking me if it was any good.
1:37:05
And so we talked
through it a little bit.
1:37:06
And then he told me why.
1:37:08
And so I took a
look at the results
1:37:10
because he was
uploading spreadsheets.
1:37:11
He was uploading PDFs and
all this kind of thing
1:37:13
and getting it to
assemble a report.
1:37:15
And now it takes him about two
hours to do the report himself
1:37:18
with ChatGPT.
1:37:19
And it worked, and it
worked brilliantly.
1:37:22
And that $150,000 a year that
he's saving on consulting is now
1:37:25
going to underprivileged
kids for hockey equipment,
1:37:29
for ice skating
equipment, for lessons,
1:37:31
and all of that kind of thing.
1:37:32
So it was taken out of the
hands of an expensive consulting
1:37:34
company and put into
the hands of people.
1:37:37
Because of this guy,
and he says he's
1:37:38
the dumbest person alive, but--
1:37:40
I hope he's not
watching this video.
1:37:44
And I told him afterwards
that, congratulations, you're
1:37:47
now a developer.
1:37:48
And he didn't like that.
1:37:51
But it's surprises like
that -- the superpowers that were
1:37:55
handed to somebody like him,
who's not technical in any
1:37:58
way, but who was able to
effectively build a solution
1:38:01
that saved his nonprofit
$100,000 or $150,000 a year.
1:38:05
And things like that
are always surprising me
1:38:07
in a very pleasant way.
1:38:12
Yep.
1:38:12
Sorry.
1:38:13
I'll get to you next.
1:38:14
Sorry.
1:38:14
Yeah.
1:38:15
For engineers like us, it's
easier to navigate the hype
1:38:20
because we can understand what
the signal is from a research
1:38:24
paper.
1:38:25
But how about people who don't
have this knowledge, like,
1:38:30
from humanities or
something [INAUDIBLE]?
1:38:36
Yeah, so just to repeat
the question for the video.
1:38:38
For engineers like
us, sometimes it's
1:38:39
easy to navigate the hype to
see the signal from the noise.
1:38:42
But what about people who don't
have the same training as us?
1:38:45
I think that's our opportunity
to be trusted advisors for them
1:38:49
and to really help them
through that, to understand it.
1:38:53
I think the biggest
part of the hype story
1:38:55
right now is just understanding
the reward mechanism.
1:38:59
That everything rewards
engagement rather than
1:39:01
actual substance.
1:39:03
And to me, step one is
seeing through that.
1:39:05
The story I just
told about my friend,
1:39:08
he'd seen all this
kind of stuff,
1:39:10
but he wasn't willing
to bet his career on it.
1:39:12
But he needed that
kind of advice
1:39:14
around it and to start
peeling apart what he had done
1:39:16
and what he did right
and what he did wrong.
1:39:18
And so positioning
ourselves to be trusted advisors
1:39:23
by not leaning into
the same mistakes
1:39:24
that the untrained people
may be leaning into,
1:39:27
I think is the key to that.
1:39:29
And just understanding that
the average person is generally
1:39:32
very intelligent,
even if they may not
1:39:35
be experts in a specific
domain, and to key
1:39:37
in on that intelligence and help
them to foster and grow it,
1:39:41
and to navigate them
through the parts
1:39:44
where they'll have
difficulty and let them
1:39:46
shine in what they're
very, very good at.
1:39:49
Over here there was one.
1:39:51
I have a question more
about AI and machine
1:39:53
learning for
scientific research.
1:39:55
OK.
1:39:56
Which is something that
is very hard [INAUDIBLE]
1:39:59
to get your perspective on.
1:40:01
Where do you think
that is a good idea
1:40:03
and where might you
say, maybe be cautious?
1:40:06
So AI and machine learning
for scientific research,
1:40:09
where is it a good idea and
where should you be cautious?
1:40:14
Ooh.
1:40:16
My initial gut check would be I
think it's always a good idea.
1:40:20
I think there's no harm in
using the tools that you have
1:40:23
available to you, but
to always just double
1:40:26
check your results and double
check your expectations
1:40:29
against the grounded reality.
1:40:31
I've always been a fan of
using automation in research
1:40:36
as much as possible.
1:40:37
My undergraduate was physics
many, many years ago,
1:40:40
and I was actually very
successful in the lab
1:40:42
because I usually automated
things through a computer
1:40:44
that other people did
by hand with pen and
1:40:47
paper.
1:40:48
So I could move quickly.
1:40:49
So I know I'm biased
in that regard.
1:40:51
But I would say, for most
research, for the most part,
1:40:54
I think use the most powerful
tools you have available,
1:40:57
but check your expectations.
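As one concrete version of that habit, here is a small sketch-- with a placeholder dataset and model from scikit-learn-- of checking a model's results against a trivial baseline on held-out data before trusting them:

```python
# A small sketch of "use the tools, but check your results":
# compare a model against a trivial baseline on held-out data
# before trusting its numbers. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# If the model barely beats the dummy baseline, be cautious
# about the conclusions you draw from it.
print("baseline accuracy:", baseline.score(X_te, y_te))
print("model accuracy:   ", model.score(X_te, y_te))
```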
1:41:03
A little story, actually, on
that side. Trivia question:
1:41:07
Poorest country
in Western Europe.
1:41:10
Anybody know?
1:41:11
Serbia?
1:41:12
What's that?
1:41:12
Western, though--
1:41:13
in Western Europe, it's Wales.
1:41:16
So I actually did my
undergraduate in Wales,
1:41:19
and I went back to do some
lectures in the university
1:41:22
there.
1:41:23
And I met with a
researcher there,
1:41:26
and he was doing research
into brain cancer
1:41:29
using various types
of computer
1:41:32
imagery.
1:41:32
And I asked him, well,
what's the biggest
1:41:34
problem that you have?
1:41:35
What's the biggest
blocker for your research?
1:41:38
And this was about
eight years ago.
1:41:39
And his answer was
access to a GPU.
1:41:43
Because for him to be
able to train his models
1:41:46
and run his models, he needed
to be able to access a GPU.
1:41:50
And the department
that he was in
1:41:52
had one GPU between
10 researchers,
1:41:55
which meant that everybody
got it for half a day,
1:41:57
Monday through Friday,
and his half a day
1:41:59
was Tuesday afternoon.
1:42:00
So in his case, he would
spend the entire time
1:42:02
that wasn't Tuesday afternoon
preparing everything
1:42:05
for his model run or
his model training
1:42:07
or everything like that.
1:42:08
And then Tuesday afternoon,
once he had access to the GPU,
1:42:11
then he would do the training.
1:42:12
And then he would
hope in that time
1:42:14
that he would train his model
and he would get the results
1:42:16
that he wanted.
1:42:17
Otherwise, he'd have to wait a
week to get access to the GPU
1:42:20
again.
1:42:21
And then I showed
him Google Colab.
1:42:23
Anybody ever used Google Colab?
1:42:25
And you can have
a GPU in the cloud
1:42:27
for free with that
kind of thing.
1:42:29
And the poor guy's
brain melted--
1:42:32
because I took out my
phone, and I showed him
1:42:34
a notebook running on
my phone in Google Colab
1:42:37
and training a model on that.
1:42:38
And it changed everything
for him research-wise.
1:42:41
And now, even with
just free Colab,
1:42:44
he had much more compute than
he had with his shared GPU.
1:42:46
So I think for someone
like him, machine learning
1:42:49
was an important
part of his research,
1:42:51
but he was so gated on it that
the ability to widen access
1:42:55
to that ended up really,
really advancing his research.
1:42:57
I don't know where it ended up.
1:42:59
I don't know what he has done.
1:43:00
It has been a few
years since then.
1:43:01
But that story just came to mind
when you asked the question.
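For anyone who wants to try what he did, here is a quick sanity check-- assuming a Colab notebook with the runtime type set to GPU, and using PyTorch, which Colab preinstalls-- that the free GPU is actually attached:

```python
# Quick check, in a Colab notebook with a GPU runtime selected,
# that the free GPU is actually available to train on.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # Tiny smoke test: run a matrix multiply on the GPU.
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul OK:", (x @ x).shape)
else:
    print("No GPU attached -- check Runtime > Change runtime type.")
```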
1:43:06
Any more questions?
1:43:09
Feel free to ask me anything.
1:43:14
Oh, yeah.
1:43:14
At the front here.
1:43:15
It's more of a general question.
1:43:17
You talked about AI helping
in food and beverage use cases.
1:43:21
Do you think AI would
be a force for social equality
1:43:25
or social inequality?
1:43:27
So can AI be a force for social
equality or social inequality?
1:43:31
I think the answer
to that is yes.
1:43:34
It can be both, and
it can be neither.
1:43:37
I mean, I think that
ultimately, the idea
1:43:39
is that, in my opinion, any
tool can be used for any end,
1:43:45
so the important thing is to
educate and inspire people
1:43:48
towards using things
for the right ends.
1:43:51
There's only so much
governance that can be applied.
1:43:53
And sometimes governance
can cause more problems
1:43:56
than it solves.
1:43:58
So I always love to live my
life by assuming good intent
1:44:03
but preparing for bad intent.
1:44:05
And in the case of
AI, I don't think
1:44:07
there's any difference there--
everything that I will do
1:44:09
and everything that I would
advise is assuming good intent,
1:44:12
that people would use
it for good things,
1:44:14
but also to be prepared
for it to be misused.
1:44:18
The bad examples that I
showed earlier on, I think
1:44:20
were good intent
rather than bad intent.
1:44:24
And most mistakes
that I see are
1:44:26
good intent being
used mistakenly as
1:44:29
opposed to bad intent.
1:44:30
But I would say that's the
only mantra that I can--
1:44:33
the only advice that I can give
and that kind of thing is always
1:44:35
assume good intent, but
prepare for bad intent.
1:44:40
The AI itself has no choice.
1:44:42
It's how people use it.
1:44:46
Andrew, did you want
closing comments or--
1:44:49
I think we were running
out [INAUDIBLE] time.
1:44:53
But thank you for this.
1:44:55
Really great.
1:44:56
Thanks, everyone,
for all the questions
1:44:57
and those creative solutions.
1:45:00
All right.
1:45:00
Thank you, Andrew.
1:45:01
Thanks.
1:45:01
[APPLAUSE]
— end of transcript —