Dom Heinrich is a leader in design, AI, and sustainability, with two decades of experience leading global teams. He has excelled in delivering AI-powered products and services, emphasizing sustainability. Notably, Dom invented products aiding mobility for Parkinson's patients and contributed to GPT-1 development with Microsoft and Coca-Cola, showcasing his impact in AI. As a Pratt University faculty member, he lectures on AI in Digital Design and Design Business, blending deep problem-solving with elegant solutions. His work with industry giants underscores a commitment to technological innovation and a sustainable, inclusive future.
Dom on LinkedIn - https://bit.ly/48sOcMO
Dom Heinrich, transitioning from finance to AI and product innovation, catalysed a transformative journey beyond mere algorithms to impactful human-centric technology, notably for Parkinson's disease. As a luminary at the Creative AI Academy and TLGG Consulting, he unravels AI's complexities in product management and the myths of self-aware machines, guiding us through the potential of language processing to revolutionise human-machine interactions. This episode explores AI's capability to enhance innovation, streamline agile methodologies, and synergise design, product, and tech teams, emphasising personalised communication and strategic refinement.
The conversation also delves into the ethical dimensions of AI, reflecting on personal stories like technology's potential benefits for individuals with disabilities. It covers the ethical challenges in agile environments, the importance of inclusivity in design, and the interplay between human judgment and algorithmic advice. Highlighting how AI tools can transform perspectives, improve communication, and induce positive behavioural changes, the episode illustrates Dom Heinrich's vision for AI's role in evolving the essence of innovation and our digital future.
Key Highlights:
🔍 3:17 - Beyond AI: The Human Edge in Innovation and Healing
🔍 8:50 - Breaking Silos: Collaboration In AI
🔍 15:20 - Empowering The Quiet Voice: Workshop Facilitation
🔍 22:24 - Mastering Leadership: Balancing Innovation & Ma
Host Bio
Ben is a seasoned expert in product agility coaching, unleashing the potential of people and products. With over a decade of experience, his focus now is product-led growth & agility in organisations of all sizes.
Stay up-to-date with us on our social media📱!
Ben Maynard
🔗 https://www.linkedin.com/in/benmaynard-sheev/
Product Agility Podcast
🔗 https://www.linkedin.com/company/productagilitypod/
💻 https://productagilitypod.co.uk/
🖇️ https://linktr.ee/productagility
Listen & Share On Spotify & iTunes
- Spotify - https://open.spotify.com/show/0lkwAYJzVSuk5zfJ1vIDZq?si=4c691fb12f124a56
- iTunes - https://apple.co/3YvTX8p
Want to come on the podcast?
Want to be a guest or have a guest request? Let us know here https://bit.ly/49osN80
Dom: I invented products that help people to walk again with
00:00:03
Parkinson's disease.
00:00:04
I had the pleasure to work on GPT-1 with Microsoft and
00:00:08
Coca-Cola, so I've been in that space and race for a while now.
00:00:13
I've seen it and can confidently claim that none of
00:00:17
the things we see out there is really an artificial
00:00:19
intelligence.
00:00:21
Ben: Welcome to the Product Agility Podcast, the missing
00:00:23
link between agile and product.
00:00:25
The purpose of this podcast is to share practical tips,
00:00:28
strategies and stories from world-class thought leaders and
00:00:32
practitioners. Why, I hear you ask?
00:00:35
Well, I want to increase your knowledge and your motivation to
00:00:38
experiment so that together, we can create ever more successful
00:00:42
products.
00:00:42
My name is Ben Maynard and I'm your host.
00:00:45
What has driven me for the last decade to bridge the gap
00:00:49
between agility and product is a deep-rooted belief that people
00:00:52
and products evolving together can achieve mutual insight, and
00:00:55
I'm going to be talking to the global AI leader, Dom Heinrich.
00:00:58
Hello everyone, we're here at the Product Agility Podcast once
00:01:00
again, and we are joined this week by a global AI leader,
00:01:05
no less, Dom Heinrich.
00:01:07
Now, Dom is a global AI leader.
00:01:09
He is an innovation executive at TLGG Consulting and the
00:01:16
co-founder, no less, of the Creative AI Academy, and these
00:01:21
are all things that we're going to be learning a little bit more
00:01:22
about in the next half an hour to 40 minutes.
00:01:24
Thank you so much for making this time very early for you.
00:01:29
So you're in New York, is that correct?
00:01:32
Dom: That is correct.
00:01:33
Thanks for having me, Ben.
00:01:34
It's early, but then not early.
00:01:36
I'm an early bird, so it's absolutely fine.
00:01:38
Six? Oh, it's eight already? No way.
00:01:41
Ben: Not early, not that early no.
00:01:44
I'm already on my second coffee.
00:01:46
Oh my God, you're so lazy, Dom. Jesus, yeah.
00:01:50
Dom: Oh, don't tell my mom, she will get really mad at me.
00:01:54
Ben: I'm not even going to make a mom joke, then. We'll move on.
00:01:57
Please, please, move on. Now,
00:02:01
Dom and I may seem overfamiliar during this episode.
00:02:04
Just as a disclaimer.
00:02:05
We only met last week, but we seem to get along okay, so I've
00:02:10
got high hopes for this conversation, Dom.
00:02:12
Now, Dom, just to set the scene somewhat for our listeners,
00:02:16
could you introduce yourself and tell us a little bit about your
00:02:20
background, your esteemed career and maybe how you ended
00:02:22
up being a global AI leader?
00:02:25
Dom: That's a great question, Ben.
00:02:26
I started my career in finance about 25 years ago because my
00:02:32
parents apparently thought that working in a bank is a great
00:02:35
idea for a creative person, and that didn't last long.
00:02:39
As you can imagine, I founded my own company or my own agency,
00:02:43
very much focusing on experiences and technologies,
00:02:46
very early on.
00:02:47
So in the early 2000s I sold it, and didn't get rich, just to
00:02:52
clarify that not every exit is a good exit.
00:02:56
But that was good,
00:02:57
because it set me on this career path,
00:03:00
and today, in the last 10 years, I've built up the global
00:03:04
innovation team for a large organization, large agency,
00:03:09
really focusing on product and services and ventures in the
00:03:12
technology and AI space, and so sustainability was a big part of
00:03:16
that as well.
00:03:17
I invented products that help people to walk again with
00:03:20
Parkinson's disease.
00:03:22
I had the pleasure to work on GPT-1 with Microsoft and
00:03:26
Coca-Cola, so I've been in that space and race for a while now.
00:03:30
I've seen it and can confidently claim that none of
00:03:34
the things we see out there is really an artificial
00:03:37
intelligence.
00:03:37
Believe it or not, the humans are still more intelligent than
00:03:41
the machines.
00:03:42
Ben: I was wondering over lunch with my son earlier today, if he
00:03:46
was to ask me what is artificial intelligence, dad, do
00:03:49
you know?
00:03:50
I think I would have said something on the lines of well,
00:03:53
it's just a computer program that seems to be able to think
00:03:56
for itself at times, and I thought that's probably a really
00:03:58
crap explanation.
00:04:00
So maybe at some point in this conversation, dom, you can give
00:04:03
me a better answer for my son.
00:04:04
Maybe that is a decent place to start, actually, a
00:04:10
decent place to start.
00:04:11
Just to set the scene, I think a lot of people talk about
00:04:14
AI and I think a lot of the listeners will be using things
00:04:17
like ChatGPT, with all these terms bandied around.
00:04:19
And before we get into really the meat of this topic, this
00:04:23
conversation for today, which will be around innovation and AI
00:04:26
and around how we can deal with some of the challenges that we
00:04:29
find in organizations that are using products and agile with
00:04:32
design functions, maybe could you give us that short little
00:04:36
pitch as to exactly how you would describe AI.
00:04:40
Dom: Yeah, I would love to.
00:04:41
I mean, none of what we see out
00:04:42
there is AI.
00:04:43
There is no artificial intelligence, it's just an easy
00:04:47
term to use for all of us.
00:04:48
And the machine doesn't think by itself, right?
00:04:51
It sits there, it waits until you prompt it.
00:04:53
It's not sitting around and thinking what will Ben ask me
00:04:57
today?
00:04:57
It's great, it doesn't think.
00:04:59
So imagine, ben, maybe it would be beneficial for a lot of us
00:05:03
humans sometimes not to think but to answer your question.
00:05:07
Imagine it's a very good mathematical
00:05:11
algorithm that can predict words, and the better the words that
00:05:15
you are inputting in the machine, the better the output
00:05:18
will be, because it understands the context of what you're
00:05:22
putting in.
00:05:22
It's then calculating what the most likely answer is, and it's
00:05:26
becoming so good because it has a large amount of data and it
00:05:31
basically went to school for many hundred years and learned
00:05:34
everything that's out there, which we may not be able, or, as we
00:05:38
said earlier, Ben, may be too lazy, to do ourselves.
00:05:41
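Dom's description here, a model that predicts the next word from statistics over what it has seen, can be sketched with a toy bigram counter. This is purely illustrative: real LLMs use neural networks trained on enormous token corpora, and the tiny corpus below is made up for the example.

```python
# Toy illustration of next-word prediction: count which word follows
# which in a corpus, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = (
    "the better the input the better the output "
    "the model predicts the next word"
).split()

# Bigram statistics: for each word, count the words that follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word, or None if the word is unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "better" occurs most often after "the"
print(predict_next("model"))  # "predicts"
```

The same mechanism also shows Dom's point about input quality: the prediction is only as good as the statistics the context provides.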
Ben: Thank you very much for that.
00:05:42
That is very useful.
00:05:43
That's very useful and it satisfies my curiosity and I
00:05:46
will be able to explain to my child much better, because I did
00:05:48
think that it doesn't really think for itself and it is
00:05:50
waiting for a prompt.
00:05:54
Dom: So we can be very lucky that they are still not thinking.
00:05:56
00:05:56
I mean, they're aiming for that, right? Companies like Meta, like
00:06:00
OpenAI.
00:06:01
They're aiming for the so-called artificial general
00:06:04
intelligence, which means the machine would be able to think.
00:06:06
I'm not sure we want that.
00:06:09
We're already confused by human thinking.
00:06:12
How do we handle artificial thinking in the future?
00:06:14
But in simple terms, just keep in mind, when you work with
00:06:18
these machines, they understand human language, which means that
00:06:22
becomes the new user interface, which is really interesting,
00:06:25
what this means for us in innovation in the future and how
00:06:28
we will interact with machines.
00:06:31
So the human-machine relationship is a big topic.
00:06:33
And keep in mind they don't know what they don't know.
00:06:36
So if you put something in and you don't explain it properly, the
00:06:41
output will be crap.
00:06:42
I think that's a slightly hopeful or hopeless romantic
00:06:46
perspective on it, that we get forced by the machines to put
00:06:51
better inputs in and explain things better.
00:06:52
It makes us maybe better in explaining things and talking to
00:06:57
each other among humans.
00:06:58
Imagine that I'm more on the positive side, that the machines
00:07:01
can make us better.
00:07:02
That's my hope.
00:07:04
Ben: I'm aligned with you on this, Dom.
00:07:05
I think it could be a boon for us to not tire ourselves with
00:07:11
the mundane and the prosaic and actually let the machines do
00:07:15
some of that, given the vast data it has, and then we
00:07:18
can add on the things on top, the things that do require our
00:07:21
own thinking or our imagination.
00:07:22
So then, how does this work with, say, product management or
00:07:27
the world of product?
00:07:28
Because we've got AI and we're talking about innovation and
00:07:30
we're talking about product management.
00:07:32
What role, then, can AI play when it comes to innovation in
00:07:39
products?
00:07:40
Dom: Let me ask you something, I would be super curious.
00:07:43
I wanted to ask you this last time, when we had our chat.
00:07:47
In your role, and I think you explained it so nicely, what have
00:07:52
you experienced among the product people and the design
00:07:57
and tech people which was a hurdle, where you had a real
00:08:00
challenge?
00:08:00
What was
00:08:01
the real barrier in that relationship for you?
00:08:04
Ben: The real barrier over time, because I've spent most of my career
00:08:08
working in organizations, and still work in organizations,
00:08:11
that have had a strong kind of agile focus and they're now
00:08:15
trying to move towards more of a product mindset and having a
00:08:18
more of a customer-centric view of their world.
00:08:21
And one of the things that I think people, that organizations, have always
00:08:26
found difficult is how do you get
00:08:30
various silos actually collaborating together?
00:08:35
Well, there's never been a mutually
00:08:38
beneficial reason for people to give up something to then work
00:08:42
with somebody else and, as a consequence, there's always been
00:08:44
very severe handovers between, say, design, product and
00:08:48
engineering.
00:08:49
Let's say.
00:08:50
So I think it's that collaboration between different silos of an
00:08:53
organization and the psychological in-group and
00:08:58
out-group type phenomena.
00:09:00
Dom: I don't want to deal with them because they're not part of
00:09:01
my group.
00:09:02
I love that you bring this up because, Ben, I agree we always
00:09:06
experience this in our work relationships in the past years,
00:09:08
and I think that's exactly how we approach, or should approach,
00:09:13
AI.
00:09:13
It's like: where are our barriers and our behaviors and hurdles
00:09:19
where AI can actually help us overcome them?
00:09:21
And so I think, in your particular instance, and we've
00:09:24
been there and have discussed that many times, probably as has
00:09:27
everybody who is listening to this: innovation means something
00:09:31
different for everybody.
00:09:32
Product means something different for everybody.
00:09:35
Certain words just have a different interpretation for a
00:09:40
tech person than for a designer or for a product leader.
00:09:43
Agile, even agile, means something different for certain
00:09:46
people.
00:09:46
Yeah, absolutely, and I think that's the beautiful part.
00:09:50
Imagine you go into a machine and say this is my transcription
00:09:54
of the meeting today.
00:09:55
Create me three versions: one for a designer, one for a tech
00:10:00
person, one for the project manager, so that everybody
00:10:03
understands what to do and speaks in their language, and
00:10:06
you can actually level up the game by feeding an AI with
00:10:11
certain behavioral knowledge about this person because you
00:10:14
work with them.
00:10:15
There are great tools out there where you can do this and can
00:10:18
actually get from their LinkedIn profile their personality
00:10:21
insights.
00:10:21
So that's one way I think technology will just help us
00:10:26
bridge the gap in communication and will help us understand each
00:10:30
other better and therefore hopefully create a better
00:10:34
outcome when it comes to agility and moving fast and moving
00:10:37
really agile and through a product life cycle.
00:10:41
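Dom's transcript-to-three-audiences idea could be sketched as a simple prompt builder. The audience list and instruction wording below are assumptions for illustration, not any particular product's API; the resulting strings would be sent to whatever LLM chat service you use.

```python
# Hypothetical prompt builder: one meeting transcript in, one
# audience-tailored summarisation prompt out per role.
AUDIENCES = {
    "designer": "focus on user experience decisions and open design questions",
    "engineer": "focus on technical tasks, constraints and dependencies",
    "product_manager": "focus on priorities, scope changes and stakeholder asks",
}

def build_prompts(transcript: str) -> dict:
    """Return one summarisation prompt per audience."""
    return {
        role: (
            f"Summarise this meeting transcript for a {role.replace('_', ' ')}; "
            f"{focus}.\n\nTranscript:\n{transcript}"
        )
        for role, focus in AUDIENCES.items()
    }

prompts = build_prompts("We agreed to rename the feature and ship by Friday.")
print(prompts["designer"])
```

Adding the behavioral or personality context Dom mentions would just mean appending it to each role's instruction string before sending the prompt.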
Ben: Here's a challenge If we take the example where we're
00:10:44
putting, say, some meeting notes or maybe like a transcription,
00:10:48
which would be quite neat, right, and maybe there's a tool that
00:10:50
takes the transcription and then, given an understanding of
00:10:53
behavioral profiles and the types of people that are there,
00:10:55
will then make a different version for each of them, I can
00:10:57
see that would be beneficial, but I see that more as like a
00:11:01
ferry than a bridge, if that makes sense, because what we're
00:11:04
not doing with that is creating that shared nomenclature, that
00:11:10
shared language, which can be really important.
00:11:12
So, whilst there are some things that maybe we would never create
00:11:16
a shared language around, project managers always want to
00:11:18
call it this, product managers always want it that, then if we did
00:11:23
just kind of get AI, and I know you never said this is the example, but we
00:11:26
could extrapolate it out to then we give it something, even
00:11:28
if it ferries back individual versions for those types of
00:11:31
people then are we not running the risk of not having the
00:11:36
conversations and really looking to seek and understand each
00:11:38
other and create that shared language which is so important
00:11:42
to create that shared understanding?
00:11:44
Dom: So, Ben, I have two answers to your question and to your
00:11:47
challenge.
00:11:47
The first one is probably my New York
00:11:51
answer, which is like: I need to move fast and I throw money at
00:11:55
the problem.
00:11:55
And if everybody understanding what needs to be done
00:12:03
gets us to move faster and be more efficient, then that's great.
00:12:06
But to give you my honest answer: in the field,
00:12:09
I feel you're absolutely right.
00:12:12
I think maybe I didn't do a good job in explaining it.
00:12:15
I think understanding each other machines can actually help
00:12:18
us.
00:12:19
If I gain insights into how somebody understands a topic, and it helps me
00:12:24
understand how to explain that better to this person, but at
00:12:28
the same time, the machine basically crafting this for me,
00:12:31
that will definitely bridge gaps and it will bring us together.
00:12:35
So of course, I can take it and the efficient part of me says
00:12:40
take it in, send it out, make sure that everybody gets the
00:12:44
same notes, in the words and language they understand.
00:12:48
But of course, there is the possibility of sitting in a meeting
00:12:52
together and it basically sensing the tone of voice, and do you
00:12:55
know when we have tensions because somebody talks about a
00:13:00
certain word in a meeting?
00:13:02
We all had it yesterday about something simple like branding.
00:13:06
What's the difference between branding and naming?
00:13:08
There is a difference, right, and there's a massive difference.
00:13:10
00:13:10
And when you talk with product people, they feel like, oh, I
00:13:15
need to brand my product. No, you need to name your product.
00:13:18
Branding is what it looks like and what your company is, but
00:13:23
that's a subtle nuance.
00:13:25
Instead of going down the rabbit hole for 10 minutes
00:13:28
and talking about this, imagine the AI could intercept in the
00:13:31
moment we speak about it, be a part of it, level us, and make
00:13:36
sure:
00:13:36
Hey, hold on a second Ben, hold on a second Dominic.
00:13:39
This is what you're talking about.
00:13:41
This is what it means to you.
00:13:43
This is what it means to you.
00:13:44
Maybe you want to find a common ground and we build a product
00:13:48
like this.
00:13:49
By the way, oh really, back in the day, in the last
00:13:53
company I worked for, we built a product on Microsoft Teams
00:13:57
that does basically this. And now it's getting scary,
00:14:00
I know, a little bit creepy, but you had to opt in to do that.
00:14:04
But listening in, understanding the sentiment of the
00:14:08
conversation, it gives me feedback if I should maybe wait
00:14:13
and ask a question, because I talk 70% of the time.
00:14:16
Or it understands the words I'm using and recognizes that
00:14:21
my language might not be correct.
00:14:23
So I think there are ways we're going towards where machines
00:14:28
will be a part of us, right, and AI will be seamlessly
00:14:32
integrated in our life, and that's gonna be exciting.
00:14:34
It will help us bridge these gaps.
00:14:37
This was a long answer, Ben, and maybe it
00:14:41
sounds a little bit sci-fi, but everything I said is
00:14:43
technically possible already.
00:14:46
Ben: Yeah, there's a huge amount of things that are technically
00:14:48
possible.
00:14:49
It doesn't mean that people are using them
00:14:52
more.
00:14:52
They're just, you know, just not using them yet.
00:14:55
Dom: Or maybe, and I'm really simplifying,
00:14:56
it's not designed the way we should use it now.
00:14:59
It's like there's barriers.
00:15:00
That's why people put cones on self-driving cars to stop them.
00:15:05
Ben: What I find interesting with what you're saying is that
00:15:07
you almost have AI facilitation there, because the
00:15:10
role of good facilitation is to hear those differences and
00:15:13
misunderstandings and help people gain that clarification.
00:15:15
Then the one thing that we know, and I'm gonna say this as
00:15:19
an absolute, is that most meetings and workshops are shit
00:15:22
because they don't get facilitated.
00:15:24
There isn't an equal voice; the quiet person never gets
00:15:27
to speak up.
00:15:29
You know we did a webinar on this recently.
00:15:31
You know, how do you get the quiet person to speak up?
00:15:33
And not every workshop, not every meeting, can afford to have
00:15:37
a skilled facilitator there to put the work in at the beginning
00:15:41
and then put the work in during the session as well.
00:15:44
So what you're saying around AI perhaps giving us the prompts
00:15:47
and some clues and some ideas as to how to make the meeting flow
00:15:51
effectively is quite fascinating.
00:15:53
I think where my brain kind of comes to a halt, and maybe it's not
00:15:56
something I want to explore now, is, you know, how do those
00:15:59
interrupts happen.
00:16:00
But I think that's more of an
00:16:03
implementation problem than it is a fundamental conceptual one.
00:16:07
So what if, then, we're in this situation where we have lots of
00:16:11
silos in the organization, and we've got, say, a
00:16:15
design department, design and research, let's say, and they're
00:16:18
doing some fantastic work which just kind of goes unnoticed or
00:16:22
unheard of by the marketing department, because they're just
00:16:24
heads down, working on what they've got to work on,
00:16:27
trying to make sure that it's all lined up, and probably
00:16:29
overlapping, doing some similar work.
00:16:31
And we've got engineering just being lambasted because
00:16:34
they're late and the stuff that has come out is buggy and it
00:16:37
isn't really quite what the product manager envisioned.
00:16:39
And we've got product trying to kind of bring all this
00:16:42
together dealing with senior stakeholders, dealing with the
00:16:44
market, trying to get it all lined up, trying to make sure
00:16:47
that the thing that goes out is a success. Right, it's a
00:16:50
nightmare.
00:16:50
Is there a situation or any tools out there that you know of
00:16:55
where, given some contextual information, it could actually
00:17:00
begin to produce some measures, like some metrics, which would
00:17:05
encourage that system to work more as one rather than those
00:17:09
four distinct parts?
00:17:11
Dom: It's super interesting, Ben, that you bring this up. I
00:17:15
built a system that is aiming in that direction.
00:17:19
It comes from the fundamental product process.
00:17:22
Remember the double diamond? It's a very famous one, which
00:17:27
many organizations use because it's easy to understand.
00:17:30
Imagine it becomes a single diamond and the first thing you
00:17:35
do is you train a data model with your data, with
00:17:41
your knowledge, with what you're trying to accomplish.
00:17:44
It's very fact-based.
00:17:45
You're training it on your personas.
00:17:48
You're creating, basically, synthetic personas.
00:17:51
You're training it on your synthetic companies:
00:17:53
so, what are my company's goals?
00:17:54
Humans are all driven by motivations.
00:17:57
The marketing department has a motivation, for some of them
00:18:01
probably to win awards, but mainly to make sure they have
00:18:05
great brand awareness and campaigns and work with the
00:18:08
biggest influencers and social media experts. Whereas the
00:18:12
product team and the research team, they want to create, build
00:18:16
a great product that's human-centered, right, and that's
00:18:19
unseen.
00:18:20
Maybe the research team in particular, they are very keen
00:18:23
on finding this new nugget.
00:18:24
They're maybe too technology-driven sometimes and missing
00:18:27
the piece of: is it actually solving the problem?
00:18:30
So imagine you put all this in the model.
00:18:32
That's the first time you spend; that's your beginning of
00:18:36
the new diamond. And then this model works in existence with
00:18:41
you through all the departments, and simulates, and anchors
00:18:44
us back to the model.
00:18:45
So is it actually solving the problem?
00:18:47
Is it benefiting our company?
00:18:50
Is it fulfilling our requirements on how much money
00:18:54
we need to spend, etc.
00:18:55
And then I feel like it's almost like a stingray, like
00:18:59
it goes a little bit out there, because the human interaction
00:19:02
and intervention and iteration come in more when the
00:19:05
simulation is getting closer to a final product.
00:19:07
The benefit of doing it that way is that it removes
00:19:13
emotions and assumptions and makes it based on facts, and it
00:19:18
does a constant product-market fit for you and a
00:19:22
problem-solution fit for you, in iteration, all the time you work
00:19:25
in it and all the time you give it more information.
00:19:28
It actually gets you closer to, not sticky notes and some
00:19:34
crazy ideas or a great marketing campaign, but to
00:19:36
the point where
00:19:38
everything perfectly fits together.
00:19:40
That's what we're teaching at Pratt University, by the way, in
00:19:44
an AI design course.
00:19:46
Ben: So my friend David Pereira the other day put an article on
00:19:49
LinkedIn, or a post on LinkedIn, saying, you know, is the future
00:19:51
of product management just AI, and will there be any
00:19:54
role for product managers?
00:19:56
And with what you're saying there, I'm gonna, like,
00:19:59
roll with it.
00:19:59
You know, one of the traits you'll see in me is I do like to
00:20:02
blow up things to extremes, to kind of test our hypotheses a
00:20:05
little bit.
00:20:06
So I'm gonna do it in this instance.
00:20:08
Let's roll with it, right, with what you're saying, because there are
00:20:10
things I would like to pick up, so let's roll with what you're
00:20:12
saying.
00:20:13
What will be the role, then, for product managers,
00:20:18
if AI, you know, to a point, does more of this and takes up
00:20:21
some of the engineering effort as well over time, do you think?
00:20:25
Dom: Yeah, I think the role of the product manager was always
00:20:28
misunderstood, because they're actually really creative people
00:20:31
and, really, most of them who are really
00:20:35
good have a great technology background.
00:20:37
Everywhere I worked with product managers,
00:20:40
I felt they were actually the only ones in the room who
00:20:43
understood both sides: the human-centered design side,
00:20:46
what we're trying to accomplish from a product perspective,
00:20:50
but also the technology side in order to accomplish it. And I
00:20:53
think their role will actually become more important
00:20:57
instead of going away, because that understanding of what I
00:21:01
look at and what I'm building and training the model and
00:21:05
making sure it's accurate is actually a big role to play.
00:21:08
I believe it is actually not removing that.
00:21:11
I think we will see other areas in a different light, but I'm
00:21:18
generally not thinking that AI will replace people.
00:21:23
I think it will make us think more critically about things and
00:21:27
challenge things, to your point.
00:21:29
So we will get a different kind of role, but the product role
00:21:33
will stay the same.
00:21:34
I see on your face, you don't believe me.
00:21:37
Ben: So no, it's my thinking face.
00:21:39
That's because I had a thought and then I had another thought
00:21:42
and I'm like, oh bugger, what was the first thought?
00:21:45
Dom: Imagine that you would have a connection to an AI in your
00:21:48
brain. But we're not going there.
00:21:50
Ben: No, well, my level of intelligence is always up for
00:21:54
debate, Dom.
00:21:55
Let's not assume anything here; I need all the help I can get.
00:21:59
However, my first thought was then around,
00:22:03
well, actually, you know, we talk about product managers, and the
00:22:06
difference between managers and leaders.
00:22:08
I was looking at a post by someone I know called Andrea
00:22:11
Albuquerque, who is talking about the future role of product
00:22:14
management and saying that a lot of it, well, actually, a lot
00:22:16
of the things that he suggested seem to be kind of more boring
00:22:19
managementy type of things.
00:22:21
Right, because management and leadership are different. A leader,
00:22:24
you're doing something new and different.
00:22:25
People want to follow you.
00:22:26
If you're then also very good at managing people and doing a
00:22:29
lot of that corporate or organizational management stuff
00:22:32
as well, fantastic.
00:22:33
I was always shocking at some of that stuff. Having new ideas and
00:22:38
being like, come join me on this new idea,
00:22:39
I could do that, yeah. But actually then doing the
00:22:42
management piece, I really struggled to do that.
00:22:45
So I wonder, then, with what you're saying. Well, it's a nice big can
00:22:51
of worms, thinking about it.
00:22:52
There's that split between product manager and product leader.
00:22:53
But there's also an element here about scaling, and one of
00:22:56
the things I see very often, and it was one of my pet annoyances.
00:23:00
In many respects, I've nothing against the people that are in
00:23:02
these roles, but when you see the role of the product owner in
00:23:06
an organization and it's just a team level analyst, they're not
00:23:09
doing any owning or management of the product, they're just
00:23:12
doing the analysis on behalf of or with the team, and that's a
00:23:15
really valuable role and skill.
00:23:17
That really needs to happen a lot of the time, and maybe AI
00:23:21
helping there would be really quite fascinating.
00:23:23
I think it'd really accelerate things, but for me that isn't
00:23:26
what a product owner is about.
00:23:27
So when people talk about the product owner, and how does the
00:23:29
product owner role scale when you have to spend all of your
00:23:31
time working in the detail, I'm thinking, well, maybe AI helps
00:23:34
those people to move away from some of that detail.
00:23:37
I mean, it all coalesced in my mind somewhat into the question,
00:23:42
which is: how can AI help people scale their product management
00:23:48
and agile engineering efforts?
00:23:50
Dom: That's a great question.
00:23:51
I think, as I said right, it's like the simulations you do.
00:23:55
I think I need to stop, pause for a second before I answer
00:23:59
your question.
00:23:59
You said something extremely powerful: the difference between
00:24:05
being really good at your job and managing people.
00:24:08
That will become bigger and different in the future.
00:24:12
I think managing
00:24:15
will become more seen as a skill in itself, because we have a lot of
00:24:19
people who go into certain roles and then suddenly are really
00:24:24
good at their job but, frankly, they are not good at managing
00:24:28
people, and that leads a great career in their job to become
00:24:35
miserable, right, they become miserable, and I think we
00:24:38
will see that differently.
00:24:40
We will see a switch in that. People managing might come back
00:24:44
even more because, imagine, it's about bringing things together
00:24:49
and making them work, as you said, right, and making sure
00:24:53
that everything works the right way.
00:24:54
I think coaching will become a bigger part in the future around
00:24:58
that, and so, before I go off down the rabbit hole, going back
00:25:02
to answer your question: I think what AI will be able to do
00:25:07
is help you scale things up by simulating things.
00:25:11
So, while you would otherwise be able to do one product,
00:25:14
just imagine it helps you to do a multitude of products at the
00:25:20
same time, achieving the same thing you did over years in a
00:25:25
faster amount of time, maybe in a few months, because it
00:25:28
simulates everything.
00:25:29
And then it comes.
00:25:32
Ben: You can simulate the testing of multiple hypotheses
00:25:35
concurrently.
00:25:37
Dom: Already, exactly. That's one of the things: if you train the
00:25:40
model properly, if you're really good at your job, if you're a
00:25:43
really good designer, a really good product manager, if you're a
00:25:47
really good technician, all these things, if you train that
00:25:50
model really well, all the simulations, all the testing,
00:25:56
all the things we usually run through with human
00:25:59
assumptions, right, and we do these user interviews, trying
00:26:03
to figure out: is what we are building really successful?
00:26:07
Imagine this is simulated, and then your job becomes a
00:26:12
different one: challenging it,
00:26:16
thinking critically about it.
00:26:17
Right, it's like you have a different role suddenly.
00:26:20
But if you're talented in what you're doing, that will
00:26:25
elevate the work you're doing and the output will
00:26:28
be more successful.
00:26:29
I think we will see more innovations that go to market
00:26:32
and die much earlier.
00:26:34
So companies will probably spend, and should spend, the same
00:26:38
amount in their R&D departments as they
00:26:41
do now, but their results and the output will be significantly
00:26:46
higher and the impact will be bigger.
00:26:49
Ben: So you've been involved in some, by the sounds of it, pretty
00:26:53
ground-breaking and really interesting AI projects.
00:26:57
Can you think of which one in particular had the biggest
00:27:02
positive impact on a customer?
00:27:05
Dom: Yeah, two actually. Two touched my heart the most.
00:27:10
The first one is we built a platform that allows people to
00:27:16
use gestures to communicate with a smart home assistant for
00:27:20
Amazon, and the platform is called Sign and it allows people
00:27:24
with hearing disabilities to use sign language to interact
00:27:28
with voice assistants.
00:27:28
Suddenly, they can go to a
00:27:35
McDonald's drive-through and use sign
00:27:39
language to actually communicate, and that's something profound. Working with
00:27:44
these people really opened my perspective on how we design
00:27:48
products and how fortunate, and sometimes
00:27:54
ignorant at the same time, we are as we walk through the world
00:27:56
designing products. Because voice, and we're back to this
00:28:00
topic, I think voice will become really dominant in the next few
00:28:05
years.
00:28:06
Humane, the new AI Pin, is going in that direction; the Rabbit
00:28:11
one too.
00:28:11
It's like everything goes in that direction.
00:28:13
What do we do with people with hearing disabilities?
00:28:16
So I think that platform was definitely one of the projects
00:28:21
that touched my heart the most.
00:28:23
Ben: Yeah, that's really, that's right, that's ridiculous.
00:28:26
It's one of those things that seems so obvious and I was
00:28:29
wondering, yeah, you have Google Translate, which, you know,
00:28:32
my children like Beyblades and they get them from
00:28:36
Japan and they're all in Japanese boxes.
00:28:37
So we use Google Translate to see actually what the hell the
00:28:40
instructions are.
00:28:41
I haven't seen, I'm not aware of,
00:28:43
but I hope that there is, the equivalent app for sign language
00:28:47
so that people can sign into a camera and then it will give you
00:28:51
an explanation of what that person's saying, because it must
00:28:54
be an incredible challenge.
00:28:57
I was trying to think of ways that you know, my dad was
00:29:00
disabled, couldn't walk.
00:29:01
I never knew him standing up and I wonder what?
00:29:05
Like how much easier would his life be now, you know, if he was
00:29:10
still around?
00:29:12
Like how much different his life would be because of the
00:29:15
products that could have been created, particularly the
00:29:17
technology-based products that could have been created to help
00:29:20
him lead a more, an easier life.
00:29:23
I wouldn't say more fulfilling perhaps, but, yeah, an easier
00:29:28
life, because you know it's difficult when you have a, they
00:29:30
hate the word disability, but when you have a difficulty, a
00:29:34
difference, you know, in a world that's not designed for you.
00:29:37
And yeah, I'm with you that if there's AI, anything that can
00:29:41
help with that surely is a positive.
00:29:44
Dom: Yeah, keep in mind, it's 11% of the population globally
00:29:48
living with any sort of disability.
00:29:49
So if you design from their perspective first, if you build
00:29:54
a product out of that perspective, you will probably
00:29:58
build a better product in general, because it helps us too.
00:30:02
There are certain moments in life where I don't wanna sit at
00:30:06
home and say, okay, Google, lights 30%.
00:30:10
Maybe I just wanna make a gesture right.
00:30:13
And so it has an impact not just on people with disabilities,
00:30:17
it has an impact on us. And I think that's where, we talked a lot
00:30:22
about technology, Ben, in the first few minutes, but it's
00:30:26
really about experiences and it's about how we engage with
00:30:30
the digital infrastructure, and I think we are so focused on oh
00:30:33
God, there's this new thing that can do these things.
00:30:36
It excites me so much we totally forget about.
00:30:40
Do I need this?
00:30:41
Do I need to do this?
00:30:42
Do I need to make my company now an AI company because
00:30:45
everybody is telling me that?
00:30:46
Or is there a better solution, to your point?
00:30:49
Maybe I should educate my managers and create a coaching
00:30:55
culture in the organization.
00:30:56
Maybe that's more beneficial and bridges the gaps between my
00:31:00
departments.
00:31:01
And now, this is contradictory to what I said
00:31:05
in the beginning, but I think there's always a balance between
00:31:08
human and machines and human and AI in the relationship.
00:31:11
You're fine.
00:31:13
Ben: So this feels like it segues nicely. We get to talk about
00:31:17
some serious topics on ethics. Those who
00:31:22
listen to my episodes, with Shane Hastie in particular, will
00:31:26
know that when it comes to coaching codes of ethics, I am a
00:31:31
big fan of the International Coach Federation, the ICF.
00:31:34
Their code of ethics I find very robust and very good for
00:31:37
people in the coaching type situation.
00:31:39
I think that when it comes to agile coaching and I would lump
00:31:43
product coaching in this as well there isn't yet a really widely
00:31:48
recognized and followed code of ethics.
00:31:50
So lots of the agile coaches out there, and I suspect
00:31:54
product coaches as well, will by accident act in
00:31:57
unethical ways.
00:31:59
Now, when people are using AI to help them with their coaching,
00:32:03
perhaps, and maybe they're going down an unethical route,
00:32:07
I suspect a lot of the AI models, the AI programs out there, would
00:32:11
kind of not be giving you unethical advice, which is maybe one way
00:32:15
in which AI is slightly more ethical than perhaps some humans,
00:32:18
just because of its vast knowledge, vast understanding of
00:32:21
what ethics really means and that whole environment.
00:32:23
So when we think about it, there are some ways where maybe AI
00:32:27
could make us more ethical.
00:32:28
What do you think are the big ethical challenges
00:32:31
facing product managers and agile people in the future with
00:32:35
the usage of AI?
00:32:37
Dom: One of the things you just said is probably the big
00:32:40
challenge: what is ethical to you, to me, to anybody else?
00:32:48
And I think we live in a society where we see a lot of
00:32:53
double standards and we see a lot of people setting the tone
00:32:58
of what is ethical and not.
00:33:00
But people are different and people have different
00:33:03
perspectives and I think training an AI model on your
00:33:08
ethics means that you're training it on your biases and
00:33:11
I'm not judging them right.
00:33:12
I'm not saying these are good or bad biases.
00:33:14
I think there are certainly absolutely bad ones
00:33:17
that shouldn't happen.
00:33:18
But the nice part is, since it becomes so widely accessible and
00:33:22
people will use it, it also will uncover the flaws much
00:33:29
quicker than before.
00:33:30
If something is trained in a
00:33:34
certain way that is maybe ethically wrong, it will
00:33:37
have a problem.
00:33:38
So to your point, if I rely on a machine to give me guidance
00:33:44
and that's why I'm very hesitant around coaching, I know a
00:33:49
couple of coaches from Hudson Institute, which is one of the
00:33:52
co-founders of the ICF, and I think what's so interesting is
00:33:57
they have a standard they set in place.
00:34:00
Let's assume this is a good standard and we train an AI
00:34:04
model and everybody uses it.
00:34:06
There might be a different culture
00:34:08
who sees that...
00:34:08
Ben: ...completely different, and might not agree with that.
00:34:11
Dom: So the question is now, then, if I'm using it and somebody
00:34:17
calls me out and says that's incorrect and I feel offended or
00:34:20
I don't feel treated well, how do we handle that?
00:34:23
Because the default can't be "the machine told me".
00:34:28
Ben: And I think this is where we get into the nuance of it,
00:34:32
because if you look at something, say, oh, let's make
00:34:35
something up, the
00:34:35
product coach's code of ethics, and that will outline
00:34:40
certain things, but this is an unregulated environment.
00:34:43
There is no central, regulated governmental body that's going
00:34:46
to come along and say you can't do that, you can do that.
00:34:48
We're not talking about war crimes here, but we're talking
00:34:52
about things that could damage individuals, institutions,
00:34:55
livelihoods, and so this is it.
00:34:57
There'll be a group of people who get together and decide what
00:34:59
that is.
00:34:59
Same way the ICF did it, same way it's done for agile coaching.
00:35:02
At some point there'll probably be a product coaching one as well,
00:35:04
and then people will gravitate towards that and say, well,
00:35:07
given this context and given my role, then I look at this and
00:35:10
this is something I can get on board with.
00:35:13
The danger then comes with this:
00:35:14
if you train AI on a code of ethics and it becomes this big
00:35:18
homogenous blob, let's say, or there's just this big code of
00:35:21
ethics AI tool out there, then you're right.
00:35:23
I think that it will become absolutely useless because it's
00:35:29
not bound within the context that people will bring to it.
00:35:31
Does that make sense?
00:35:32
So I can kind of see it if the ICF got together and said,
00:35:36
oh well, let's pump our code of ethics into some AI tool
00:35:39
and use it to help assess some of the situations
00:35:43
that we're being faced with and maybe give us a slightly more
00:35:45
objective view.
00:35:46
I can see that as being useful. But I agree, if it just becomes
00:35:50
one person's bias, always
00:35:56
looking to be applied in situations where it isn't perhaps
00:35:58
contextually relevant, then it could cause some damage.
00:36:04
Dom: I think what you said is absolutely amazing, because I
00:36:06
think that you carved out the benefits of it right, and it's
00:36:10
like, let's say, we have a system that we teach it on, then
00:36:13
it becomes valuable.
00:36:14
But to your point, it's one bias, yes, but we have the
00:36:19
unconscious bias as well.
00:36:20
And let's say we are 10 white men in a room and we think what
00:36:25
we are establishing now is perfect, I would argue a lot of
00:36:30
women might not agree with us, and now we are maybe adding a
00:36:33
couple of white women.
00:36:34
This goes on and on, and so I think that's the tricky part of
00:36:38
it, right, how do I?
00:36:39
I'm absolutely for regulations, but I think the question is,
00:36:45
who do you put together in certain instances for these
00:36:48
ethical questions? People who have no bias, which is impossible, but
00:36:55
who at least create a set of diversity that is valuable.
00:36:58
And I think that's where I don't believe that machines will
00:37:02
take over humans, because we will always be needed.
00:37:04
If everything is just rationally following a code,
00:37:09
then we probably miss what makes us human.
00:37:11
It's the nuances of it, and understanding the subtle
00:37:15
differences in the right moment, in the right context, which
00:37:18
might be different, no matter what I, as a human, said in
00:37:21
ethical requirements.
00:37:22
It might just not come across that way, or shouldn't be executed
00:37:27
that way in that instant or in that moment, and the machine will not
00:37:30
know it.
00:37:30
It will just follow the code.
00:37:31
It will give you a recommendation.
00:37:34
Ben: I'm wondering, Dom, whether we should probably begin to draw
00:37:39
this episode to a close, because we're hitting the upper limit
00:37:44
I like to keep episodes to.
00:37:46
I've still got a ton of questions I'd like to ask, so
00:37:49
maybe I'll ask you offline, or maybe we'll find some other way
00:37:52
to get these answered.
00:37:53
Perhaps you could come and do a live stream with us, Dom.
00:37:56
We could field some questions from our listeners and a broader
00:38:01
audience. Perhaps we'll look to get that arranged in the next
00:38:03
month or two.
00:38:03
So, everyone, make sure you keep your eyes peeled on
00:38:06
LinkedIn and Instagram for information on when that
00:38:09
will be happening, if Dom agrees when we stop recording.
00:38:11
I want to give people something practical, Dom, so I'm
00:38:15
wondering if you could give one challenging, practical tip or
00:38:21
provocation for a product person, or for an agile person, or both,
00:38:26
in how they can use AI today.
00:38:28
What is your provocative and challenging tip you could
00:38:33
provide to them?
00:38:34
Dom: Don't use AI.
00:38:35
Let me explain.
00:38:37
My recommendation is always understand the motivation of the
00:38:41
person that is in front of you and then use AI to rationalize
00:38:46
it from a product perspective, because, exactly how we started
00:38:50
the conversation, it was around that we have misunderstandings
00:38:54
or misconceptions and everybody follows their own agenda.
00:38:57
As you said, the marketing team is already off building a
00:38:59
campaign, while the others are still trying to figure out how
00:39:03
to solve the problem.
00:39:03
And I think, if you think about what everybody's motivation is,
00:39:09
you can then use that information, feed it in, and
00:39:13
have a rational consensus about it.
00:39:15
So ask the machine: hey, look, these are all the motivations,
00:39:18
this is what I'm trying to accomplish, what's the
00:39:22
rationality of it and how do I accomplish it? You will probably
00:39:25
get really good recommendations and a less biased answer from
00:39:30
the machine.
00:39:30
And I think that, from a practical standpoint, sometimes
00:39:34
we make assumptions based on the conversations we had, and if I
00:39:38
put these assumptions in and ask ChatGPT, the premium
00:39:43
version, to rationalize that for me and filter out all the human
00:39:47
emotion in between, you usually get a really good answer that
00:39:50
is less biased and less assumption-laden than it was before.
00:39:54
Ben: There is nonviolent communication, which talks about the
00:39:57
things that you can observe, rather than evaluations.
00:39:59
It also reminds me of something. I'm talking about kids a lot.
00:40:01
I did homework with my children this morning and one of
00:40:04
the questions was can you pick out two words or phrases which
00:40:07
tells you that this was a horrible journey?
00:40:09
And it was talking about a Roman soldier, and the two words
00:40:14
seemed to be "violently", regarding the weather, but also the
00:40:18
word "horrendous". And I was like, well, actually, the violent waves
00:40:22
lapping over, "violently" is slightly
00:40:25
a subjective label on it, but at least then it's talking
00:40:28
about the waves lapping over.
00:40:29
That's something which could be recorded and you could see. To
00:40:31
say it was "horrendous", that's a different kind of word.
00:40:33
It isn't a factual statement, that's just somebody's opinion.
00:40:35
And if you were to run that through an AI, ChatGPT for
00:40:39
example, and say just give me the facts of this, it probably
00:40:42
wouldn't come back and say it's horrendous, because that isn't a
00:40:44
fact, that's just a subjective opinion.
00:40:46
And what you're saying is, if we can use ChatGPT, or other
00:40:50
models are available, Claude, yeah, Bard, whatever it may be,
00:40:54
and say, actually we've had a difficult conversation, or we're
00:40:56
going to have a difficult conversation,
00:40:57
can you help me generate a different perspective on this?
00:41:02
Dom: Exactly, and I think that is extremely powerful.
00:41:05
There's a great book, Unseen Motivations, Unheard
00:41:09
Motivations, which talks exactly about that:
00:41:13
how can I change my perspective in the room?
00:41:16
I walk in the room and I have an agenda and I try to present a
00:41:20
new product and want to get everybody rallying and one
00:41:25
person in the room challenges me around the way I do the
00:41:28
presentation.
00:41:29
Honestly, my personal motivation changes from showing the
00:41:33
product to convincing the other person that I have a
00:41:38
great presentation, which means the way I walked into the room
00:41:43
is different to the conversation I'm having.
00:41:46
So how do I balance that?
00:41:48
And I think in these moments this is a real-time example.
00:41:51
Right, how do I do this in a real-time moment?
00:41:54
But sometimes taking a breath, walking out of the room and
00:41:58
saying give me one second, I need to take a breath, is
00:42:01
absolutely fine, by the way, I think that's ethically what we
00:42:05
all should do and embrace a little bit more.
00:42:07
Think a moment about it.
00:42:08
Maybe ask ChatGPT, Bard, Claude, any of these tools about
00:42:14
how to approach this, and you might get a different
00:42:16
perspective, and you might walk back into the room and it will
00:42:20
maybe solve the problem, in the sense that the tension is gone
00:42:23
because you acknowledge: hey, the presentation is maybe not
00:42:27
looking how you expected.
00:42:28
Let's talk about the product and let's make sure that
00:42:30
next time we change the presentation in that way.
00:42:33
And I think that's where I see the power in terms of behaviors
00:42:38
and behavioral changes where AI can actually help us.
00:42:42
Ben: And I think if a lot of us don't get on that bandwagon and
00:42:45
start using it in that way, we'll be left in the dust.
00:42:49
Dom: Yeah, I totally agree.
00:42:51
Ben: I just think that I can hear certain people saying oh
00:42:53
you know, but we're humans and we should be able to do this.
00:42:55
Yeah, but we don't deal with it.
00:42:56
We're pretty shit.
00:42:57
We have pretty shit relationships and we're pretty terrible at
00:42:59
meetings and speaking honestly and we leave and we're like why
00:43:02
was that meeting crap?
00:43:03
Why didn't such and such speak their mind?
00:43:04
Why is no one going to do the actions that came out?
00:43:07
Well, if we could just take a moment and just get a bit of
00:43:10
advice, you know, from a third party, then surely we would do
00:43:14
that.
00:43:14
And the fact is, in our pockets, we've got that third party
00:43:17
with more knowledge than we can even begin to comprehend,
00:43:19
waiting for us to ask it questions that we
00:43:23
don't even know to ask yet.
00:43:25
And so I think that if we don't get on this bandwagon in some
00:43:29
way, shape or form, I think that other people will and they will
00:43:32
overtake us.
00:43:33
So, Dom, thank you so much for this opportunity to get to
00:43:37
explore this.
00:43:38
I think it's been a wonderful conversation and it's nice to
00:43:41
have such an esteemed member of the AI community as
00:43:44
yourself
00:43:45
come and grace this podcast with your presence.
00:43:47
So thank you very much.
00:43:48
If people want to learn more about you, of course they can go
00:43:52
on LinkedIn, because I mean, who doesn't go on LinkedIn?
00:43:55
Unfortunately, is there anywhere else that people could
00:43:58
look to find out more about you?
00:44:00
Dom: I think LinkedIn is a good place to start.
00:44:02
You will find me there, Dom Heinrich, very easily, and
00:44:08
then connect with me and you will get all the websites, from
00:44:11
TLGG to Creative AI to Pratt University. Everything you need
00:44:15
to know,
00:44:16
you will find it.
00:44:17
Ben: And there's so much we didn't talk about.
00:44:18
We didn't talk about Pratt University, we didn't talk about
00:44:20
TLGG, we didn't talk about the Creative AI Academy.
00:44:22
So maybe we will pick some of that up on the live stream and
00:44:26
we are also going to record an episode on sustainability.
00:44:30
So, Dom, thank you very much for coming along.
00:44:32
Everybody, thank you.
00:44:34
Thank you so much, it's been a pleasure.
00:44:35
Thanks for listening and we'll see you all again very soon.