
Human Truth Podcast | Ep. 14: How Generative AI Will and Won't Change the Future of Work

The potential of GenAI brings big claims and outsize predictions—both promising and perilous. We discuss with special guest, Dr. Anna Tavis, Professor of Human Capital Management at NYU.


It’s the Human Truth Podcast where each episode we take a deeper look at a popular workforce statistic ripped from the headlines and ask: What is it about? Is it accurate? And, why should we care? This week we’re discussing how GenAI will affect the future of work.


The age of AI transformation is upon us as you’ve no doubt heard. Opinions and forecasts for what it means for the future of work run the gamut from promises of accelerated productivity and unprecedented innovations to predictions of AI taking jobs and wiping out humanity as we know it. 

The World Economic Forum predicted that by 2025, 85 million jobs will be displaced by automation and technology, but that AI will also create 97 million new roles. 

To discuss this finding and try to make sense of so many big claims and outsize predictions, we're joined by special guest, Dr. Anna Tavis, Clinical Professor of Human Capital Management at New York University, and a well-known thought leader on what to expect from the future of work. 

On the podcast this episode:

  • Host, Ian Cook, Visier’s VP of People Analytics

  • Guest, Dr. Anna Tavis, clinical professor and academic director of the Human Capital Management department at NYU; co-author of Humans at Work; Senior Fellow with the Conference Board; and the Academic in Residence with Executive Networks

Mentioned in this episode:


Skills Survey Report

Subscribe to The Human Truth Podcast wherever you listen to podcasts & never miss an episode!

Listen on Google Podcasts, Spotify, or Apple Podcasts.

Want to get in touch? Email us at podcast@visier.com


Episode transcript: 

[PRODUCER] Expect huge transformation when AI and people analytics come together. Vee is Visier's AI digital assistant. Ask Vee any question about your people data and get an immediate answer, even suggestions for how to further drill down. Want to see Vee in action? Contact Visier to get on the waitlist now.

It's the Human Truth Podcast, where each episode we take a deeper look at a popular workforce statistic ripped from the headlines and ask: What is it about? Is it accurate? And, why should we care? The age of AI transformation is upon us, as you've no doubt heard. Opinions and forecasts for what it means for the future of work run the gamut from promises of accelerated productivity and even unprecedented innovation to caution about the possibility that AI could take your job and also wipe out humanity. The World Economic Forum predicted that by 2025, 85 million jobs will be displaced by automation and technology, but that AI will also create 97 million new roles.

To try to make sense of so many big claims and outsized predictions, let's get into it with host Ian Cook and our special guest, Dr. Anna Tavis, Clinical Professor of Human Capital Management at New York University and a well-known thought leader on what to expect from the future of work.

[Host, Ian Cook]

Hi, I am Ian Cook, the host of the Human Truth Podcast, where today we are talking about generative AI and what it means for the future of work. To discuss this topic, I am joined by Dr. Anna Tavis, who is a Clinical Professor and Academic Director of the Human Capital Management Department at New York University School of Professional Studies, a Senior Fellow with the Conference Board, and the Academic in Residence with Executive Networks. She's also the co-author of Humans at Work: The Art and Practice of Creating a Hybrid Workplace, and is frequently invited to speak about the future of work. Dr. Tavis' career also spans the professional side: she was head of Motorola's European OD function based in London, Nokia's Global Head of Talent Management based in Helsinki, Chief Learning Officer with United Technologies Corporation based in Hartford, Connecticut, and Global Head of Talent and Organizational Development with AIG Investments based in New York. What a career, Anna, what a fantastic array of experiences and learning to bring to the conversation. It's a genuine pleasure to have you today, thank you for joining us, we're looking forward to it.

[Dr. Anna Tavis]

Thank you, Ian, I'm excited to join you as well. You guys are leaders in this field and it's always very productive to talk to you. Lots of new ideas and opportunities come out of these conversations. Thank you.

[Ian Cook]:

Oh no, you're welcome. And I think that's a great place to start, our theme today is generative AI. There's lots and lots of conversation around generative AI, and a place I'd like to start with, and I may be just kicking us off with a little bit of fun, is: what are the craziest claims, or what are some of the wildest claims, that you're seeing in the press? I think we're going to get to the realities, the practicalities, but a lot of the times we see and hear things where the promise of generative AI seems like it's going to solve everything possible. So, I wondered if you had a favorite moment recently where some claim is made for generative AI that just seems a little bit beyond what's possible.

[Dr. Anna Tavis]

Yeah, no, I think that it's more on the dark side, that the AI is going to destroy humanity. I think before that happens we are going to be destroyed by climate change. And I'm here in New York City where we just experienced this firestorm coming from Canada, our friendly neighbor. So I think that even if it is in the box, in the future for us, we have to be worried a lot more about other types of dangers to humanity, and AI could actually be a solution to that.

[Ian Cook]:

Brilliant. That's a great place to start and I fully support your perspective there, before the bots take over we have a planet that we need to look after, I stand very much with you there. So what would you see, again, having looked at your career across both the world of business in terms of how talent development is done in small and large organizations as well as your academic studies into how work happens, what do you see are the promises of generative AI? Or, what's different about it, how could it help a business?

[Dr. Anna Tavis]

Ian, I think that the most important benefit that's coming our way is our ability to personalize at scale. To put it in one sentence, we've been on this path of human-centric design for our organizations, and we brought data to help us understand where we are going, because I think that was the nearest sort of breakthrough in human resource management. And now we have the ability to actually deliver on that promise through the vehicle of AI as it's becoming more and more distributed and available to us. And just to give you an example, I'm currently working on a book on digital coaching, and just looking at the evolution of coaching from being an exclusive, very kind of esoteric and elitist tool for senior executive development, where now we're seeing companies integrating this and bringing in AI coaches to entry-level employees, and that would not be possible if humans were involved, it's just not humanly possible.

But with the bots being in, some of the early research that's emerging on the interactions between humans and AI is actually very promising and very, very positive. So that's just one illustration of how the dream of personalization and customization is being delivered now through the tools that were not in our possession before.

[Ian Cook]:

I love that perspective, because often AI gets positioned as this element that is counter to humanity: Oh, the bots are going to take your job. What I appreciate in, again, the kind of positive element of what you're highlighting here is, actually no, the bot's going to make you better at your job. Just-

[Dr. Anna Tavis]

Exactly. Yeah. In fact, I think the bots, again, it will all depend on in whose hands we are going to find those tools to begin with, and that could be a different conversation, but I actually think that the bots are going to make us more human. As I've been looking into some of the experiments that are going on, they're all early stages, but for example, in the space of inclusion, I've done quite a bit of research on empathy and how these bots are able to prompt and nudge individuals to be more empathetic and humane. And one case that I've been particularly interested in: one of my colleagues has a neurodiverse child with Asperger's, and the dream of that parent is to have a bot that's going to be available as a companion to his child when he's in a workplace and other types of situations, to support his integration into different organizations, et cetera, something that before was only available at a very, very small scale.

[Ian Cook]:

Yeah. And I think there's two components I really, again, I really appreciate in what you're highlighting is, one is the personalization. I mean, I was laughing on the inside because talking with some colleagues this morning and they were talking about, we had a burger analogy going for various reasons, and one wanted sesames on their bun, one didn't want sesames on the bun, and I was like, "You know what? This is the 21st century, you can have both." And when we have generative AI from a learning perspective, a guiding, coaching perspective, my sense is, from what you're suggesting, is it's going to respond to your personality, it's going to respond to your learning approach. It's going to start to be able to go back and forth on understanding how do you best consume the information I'm sharing. So that's a crazy amount of personalization.

And then the scale, I worked as an executive coach back in the late '90s, it was a great gig if you could get it because you were kind of in this hallowed space of talking to senior executives, you were making a big difference. I can imagine scaling that across 4,000 people through a bot is just the elevation in performance for individuals, which then collectively elevates the business. That's really, really, really fascinating.

There's all this conversation of, and I think we should touch on this because a lot of the conversation is around, my job's going to go away, and there's clear and highly understandable fear around that. What's your perspective on whether the bots are going to destroy work versus just change work? How do you see that?

[Dr. Anna Tavis]

Right. Yeah, I'm more on the optimistic side of this battle that's going on about where to place AI. First of all Ian, I know Visier is in the data business and the quantification business, and I personally think that jobs are the wrong unit of measure when we talk about jobs going away. I think every job will have a component that, I would say, will be happily automated; we all have been in jobs where we want to free ourselves up to do the real creative work, et cetera. And I think we need to, and this is where the whole movement towards skills-based HR is moving in the right direction. I'm not sure that skills are exactly the right unit of measure, but we definitely need to look at something more precise than jobs. And the other thing is that if you really think about a job, it's really a social construct more so than a particularly task-related construct.

Obviously the task is at the core of it, but we all know about the organizational social capital of a job. We talk about collaboration, we talk about all sorts of different alignments that need to happen for that job, for that task, to be executed. And so at this point, and this is how I'm thinking about those numbers, these are certain elements of the jobs: I can see more of a fragmentation, with parts of jobs being automated, but other needs are going to be available, other qualities of the job or functions of the job are going to be more enhanced. We talk about customer service, we talk about nursing, for example: do we have to have these highly trained nurses, whose value is really with a patient, spending time entering data and doing all the housekeeping that could be performed by a robot, for example? So I think that is an example of how I see that job transformation happening with AI tools available to us.

[Ian Cook]:

And I think you make a great point around, are we counting jobs or are we counting work? And are we looking at work being done, and the remuneration worth for work? One of the first studies that I've come across where generative AI was put in place was to learn the behavior of the best call center agents and to return those behaviors as prompts to other call center agents. There was a significant lift in terms of overall performance for the call center, but some real complex human dynamics in terms of the generative AI learning from the best and provisioning the rest to be able to follow that. So how do you build reward? How do you keep the best growing? It raised lots and lots of questions about this evolving nature of work and how humans interact with it. So, I don't know that we know the answers yet, but I think we're very much... What is both clear, and I'll again be interested in your perspective, is this is coming, there is no stopping this change in technology, we're not going to go backwards. How would you see it, is it here to stay?

[Dr. Anna Tavis]

Yes. Definitely. I think the genie is out of the bottle and I think that it doesn't matter who signs those letters saying stop the development, stop the rollout of these technologies, they've already been around, it just became publicly available to us. But I think now, our time would be better spent thinking about, and I know it's also a trendy word and a lot of conversations around alignment, how do we align these technologies? I think we've learned a lot.

What's really encouraging to me, we just had a coaching and technology summit at NYU, and most of the conversation was about AI coaches, it was very interesting, and the interactions between the human coaches and AI coaches. And it was very, very significant to us that at the front of the conference we put EEOC Commissioner Keith Sonderling, that's the US Equal Employment Opportunity Commission that really regulates the workplace, and there was some symbolism in that: that we are now beginning to think about how we're going to be containing, how we are going to be directing these types of tools in the organizational setting, much, much earlier than what we've done before with other tech.

[Ian Cook]:

Yeah, and that sort of leads me to the next area I wanted to explore, which is the challenges of implementing AI, specifically generative AI. And I would share your perspective that I have never seen governments, states, different industry bodies, so quickly grasp onto and start to create frameworks for legislating the use of these tools. It has moved, compared to previous changes, it's kind of moving at super lightning speed. And again, potentially going back into what you were seeing around this coaching event that you were at, what challenges were people running into in terms of actually being able to take this new technology and start to use it in meaningful ways in the workplace?

[Dr. Anna Tavis]

What's really interesting, Ian, to me, that the most resistant group was the coaches. So there is a resistance to change, a resistance to consider it, literally running away from looking at the opportunities and jumping on board and helping drive it and direct it in the right way. So I think that that's where we see a lot of fear, and we are very familiar with this in a change management process, it's the legacy group that's going to be most resistant.

But what we see, as everyone was pointing out, is that a lot of the pull was from within organizations, within the employee populations, who are saying: we want this, we want to have access to these tools, we want the support. We never had good coaching coming from our managers because, we know why, humans are overworked, overburdened by other tasks, et cetera, and here's a tool. So, I think there's a lot more receptivity in the market; the legacy group is beginning to just get reconciled with the fact that it's coming and there's no stopping it. And I think that the right kind of guidance, the governance, as with everything else new that we are bringing in, is going to be critical.

[Ian Cook]:

And again, I appreciate where you went with that answer because I think oftentimes when people think about technology and technology deployment, they go to this question of, well, how do I get the servers in place? How do I put the model in place? How do I train the model? They look at the technical barriers to this change, and what I hear you hitting on, and again, what was sort of highlighted in other articles I've seen is, some of the technical challenges are actually fairly straightforward, we know enough about how to train generative AI. You take the model, you take the data set, you run through a training cycle, it's an understood path, not an easy path, but it's an understood path.

What we don't necessarily understand, and what we have less resolved, is: well, how are the different players going to feel about it? Because one of the things I would test, but one of the things I hear amongst that coaching population, is: well, this is my work, this is my skill, this is my knowledge that is actually being encoded into the AI, it's automating me, and that feels... I can understand having a reaction against that.

[Dr. Anna Tavis]

Yeah, exactly. And I think this is where my point comes in about us really, for the first time, and at scale, understanding what makes us human. It's this comparison with the other intelligence that is different from ours, which is evolutionary intelligence. We know that what we have now has evolved over millennia, and we carry that information, as we know from all of the neuroscience and other types of sciences, genetics, that is coming in, and that's very, very different from this new type of intelligence. So we never really competed at that level. I think previous technologies were around physical strength, right? We could lift, and then the horses were running, then we trained animals to do certain functions, animals that had that innate ability to be faster than the humans, and we tamed, domesticated those animals, and then we came up with the machines, we can fly, et cetera, et cetera, but it was primarily at the physical level.

And I think what's a little bit maybe scary to a lot of us is that now it's in the space of intelligence, synthetic intelligence, I would call it, or artificial, where we get some competition. But I think if we look at chess, we know that the computers are much better at chess than the humans are. At the same time, what I've noticed is that chess is actually becoming more and more popular. And in fact, we had a Netflix series that became so popular, very good, and they had a woman chess player there. So there's something different about it, and obviously the whole point was... capturing the whole point of the movie, I think, if we compare it with the match with Deep Blue, I think it was, the first IBM chess player, playing against Kasparov, the champion.

So I think that this is the same thing, and it's a very uncomfortable feeling in the beginning, but I think we will need to learn to live with this additional type of intelligence. And what's very interesting to me, Ian, we kind of talk about machines and humans, but what is emerging is a parallel conversation about us beginning to appreciate the intelligence of the animals, and their kind of connection to other types of intelligences that we did discuss, but only in the more esoteric, closed circles. But now that we have artificial intelligence, human intelligence, and all of the other alternative types of intelligences, we realize that there are multiple of those. And so how do we lean into our strengths and the uniqueness of humans, and at the same time delegate the other types of work to other intelligences and live together as a community?

[Ian Cook]:

Yeah, no, that's a really compelling and interesting picture of the future that we're working towards. So again, just quickly digging into this notion of coaching and various other uses of generative AI. Did the question of hallucinations come up? I mean, the press is very quick to feature components where the AI literally makes things up. I mean, the case example that was entered by a legal individual is one that generally has got a lot of attention. So that question of hallucination where the AI fully gets it wrong, did that come up at all in the conversations as a challenge?

[Dr. Anna Tavis]

Yeah, of course. But what I loved about one of the comments that the commissioner, EEOC Commissioner, made at this conference was, humans make a lot of mistakes and we kind of take it for granted. And his comment was about the black box, everyone is complaining about the black box. What about the black box of the human brain?

[Ian Cook]:

Yes.

[Dr. Anna Tavis]

It's even darker than... At least we can take apart and break down what's going on on the technology side, but humans are a lot more non-transparent and ambiguous and subjective than any of these technologies that, at least at this point, we've come up with.

[Ian Cook]:

Yes. It almost becomes a defining feature of intelligence. So that's an interesting perspective.

[Dr. Anna Tavis]

Exactly. We're very afraid of hallucinations of the machines, but what about hallucinations of the humans? And I think it... Yeah, go ahead.

[Ian Cook]:

And we have lots and lots of proof of the hallucinations of the humans, just systemic bias and the bias that riddles many aspects of society; it all comes down to a decision made by a person that isn't transparent, that is opaque, that isn't testable. So yeah, I think that's a fantastic piece to share.

So a great place for us to take a break. It's a fascinating topic, I appreciate digging through it. We're going to go to a quick break and we'll be back with more commentary around generative AI.

[BREAK - AD]

You can ask Vee any question about your people data and get an immediate answer in natural language. Some things you can ask: What's the average time to hire in sales? Which employees are at risk of resigning? Are our L&D programs making an impact on revenue? See Vee for yourself; contact Visier to get on the waitlist now.

[Ian Cook]:

And we are back. I am Ian Cook, the host of the Human Truth Podcast from Visier, and today I've been joined by Dr. Anna Tavis, Professor of Human Capital Management at NYU and co-author of Humans at Work. We've been talking, going quite deep into this whole transition to generative AI and what it means for the future of work, exploring both some of the hype and challenge as well as some of the realism and the opportunity. I think we've got a nice balance there.

One of the things that comes up a lot around generative AI, that we touched on but haven't yet gone deep into, and I think this is very relevant to the sort of CHRO leadership community that listens to us, is: how are regulations going to play out? We've seen some of the human response, and you talked about the coaches who have that reaction that this shouldn't happen, this can't go forward. At the same time, we think the technology is so compelling that it is likely to be used, and then there's regulation in the middle of that. What is your sense, or what are you seeing, in terms of how tight the regulation will be or how loose the regulation might be, and how that might be shaped by society's response to the potential of this technology?

[Dr. Anna Tavis]

The regulations conversation is very central, I think, to the workplace right now. And one thing that I've learned talking to a lot of employment lawyers as well as the commissioner, is that everyone is responsible for their outcomes. We are responsible for the outcomes, and as far as regulation around the outcomes is concerned, the laws are pretty clear. We can't discriminate, we can't recruit and fire people based on bias, et cetera. So those should be the guidelines for anyone thinking about using these tools in their workplace. Because, and here's the thing, I think that the question then becomes, if there's some level of breaking of that law, discrimination occurs, who is responsible for this? Is it going to be the employer or the vendor who provided the tools? And if I were an employer, I would go on the conservative side and say, I'm responsible for the outcomes.

So what I see is less around, let's come up with a new set of laws and new agencies et cetera, et cetera, that might come down the road as these technologies evolve. From a very practical perspective, I think that it is a shared responsibility between the vendor who's bringing in the tools and the employer who should be very vigilant around the outcomes they want to see and receive. Because at the end of the day, all the laws, the old laws of the workplace, apply.

[Ian Cook]:

Still apply.

[Dr. Anna Tavis]

As I said, we need to be very vigilant about it. And if not, reverse engineer the tools to make sure. Because, I mean, giving people access to the algorithm doesn't make sense, as anyone who works with algorithms knows, so it's just window dressing. I think it's really, really about accountability, shared responsibility, and partnership between people who are working on the tools and people who are actually going to be embedding them in their processes and methods.

[Ian Cook]:

And I think that's really strong guidance, especially if you're a CHRO, and New York has already passed regulation that makes it really clear: for any decision about a human being, be it hiring or firing or promotion, that uses an algorithm either to support or make the decision, the employer still holds accountability. You can't say, oh, we used this machine and it said so and so is better, so not our problem. It's still your problem. And I think that's one of the pieces of education that is very important inside an HR group: yes, the machine is here, the generative AI is here to help you and guide you, but if you don't feel good in your gut about the decision in the end, then you have to explore why, because we are accountable for that decision.

And I think then, from the Visier perspective, we've always taken this approach of transparency around not just what's the output, but what's the algorithm behind it? Why was that chosen? What was the testing that's been done on it? It strikes me a little bit like the academic sphere, where you don't get to publish research without five or 10 independent people crawling all over it and making sure you've done your work well. Again, in the right places, I see that kind of approach taking place. We have a machine learning algorithm for predicting the likelihood of resignation, and we're fully transparent, we publish the validation of that back into the application, kind of exactly for the reasons you highlight, Anna. We have to hold ourselves up to a higher degree of transparency to build the trust, to build the use.

Looking forward then, we started with what I feel is an optimistic view, I think one of the ways to get these kinds of technologies adopted is to take the optimistic view. Looking through your crystal ball, based on all of your experience, how do you see generative AI helping, contributing to more human-centric workplaces? How do you see this developing over time?

[Dr. Anna Tavis]

Yeah, no, I hold a lot of hope for this to be really transformational for the workplace. Again, in the spirit of making the workplace more human-centric and democratizing a lot of the benefits of working for organizations that were primarily felt at the top of the pyramid, that corporate pyramid, rather than at the bottom. Because these types of tools can scale, which is their principal advantage, and be sort of imitative of the human-quality services, that will be available to anyone in their organization.

And that's where I also think, something that I mentioned before to you, Ian, that I think we need to think at the macro level. Right now there's a lot of thinking that's going on at the micro level of jobs, but at the macro level of organizations, we need to be really thinking about: what does it mean systemically if everyone gets development? We know that access to development opportunity, career advice, et cetera, et cetera, will create a lot of demand. People would want to be moving, there needs to be a lot of mobility, upward mobility and opportunity, et cetera, et cetera. This is where organizations need to be preparing themselves to become those learning systems, learning environments, and cultures that we've been hoping for for a very long time. A lot of it was at the purely academic level, but I do think we underestimate the overall impact of these technologies, a positive one, but the demands are going to be so much more significant.

[Ian Cook]:

No, I think that's a really wise comment, Anna, this whole notion of accelerating learning. We see it inside our own groups where people work on a... They use a new technology, they work on a certain aspect of our technology, three to six months they've mastered that, they want to work on or do something new. We have a more open perspective, yes, you need to learn at a certain pace, but at the same time, you also need to work on things that we need or help move it forward, so we can't move people every six months, but we can move them every 18 to 24 months. And that's again, an intentional piece around learning, not getting bored, et cetera. What I'm hearing you saying is that generative AI is going to actually scale that and potentially make it faster.

[Dr. Anna Tavis]

Yes. And just think, Ian, before, the employers could get away with maybe paying people a little bit more, and compensation was the driving factor, and we've been talking about motivation, we've been talking about intrinsic commitments, et cetera, for a while, with pay still being at the top of the choices that people make. But I would imagine that we finally reached that Nirvana, and I put it in quotes, where people are motivated by learning and development and ambition and all of that. That will be a huge challenge to a lot of our organizations. And you won't be able to get away with saying, my generation had to work so hard to get to where you are; that argument is going to be destroyed in five minutes.

[Ian Cook]:

Yes. And I mean, I feel like that argument's been overused for a couple of decades now, and if you hear yourself saying that, it's like, well, it took me 10 years to progress, then you aren't paying attention to the fact that the world has moved on. So, you're a CHRO, you're in a learning space, you're hearing about this generative AI, do you have two or three pieces of guidance for how somebody gets themselves educated and actually starts bringing this into their organizational approach?

[Dr. Anna Tavis]

Yeah, no, I think the old advice of starting slow, with small samples, applies; definitely find an area, for example coaching, we know that this now is becoming sort of a fad and we'll see where... But there's a level of maturity that some organizations have achieved by rolling out these more available, accessible tools. So that's number one. Number two, I think that companies need to be very savvy and much better prepared to select their vendors and the people they want to work with. The trust, it's not just about procurement, it's: who are you trusting with the technology that you are going to bring in that's going to help you run your organization in the 21st century?

[Ian Cook]:

Yeah. No, that's good advice because there are still a number of vendors like, oh, well, we can't tell you how it's done because our algorithm is proprietary. And for me, that's always a warning sign, the transparency has to be there.

[Dr. Anna Tavis]

Yeah, and I think something that you just said Ian, that you approach it almost like clinical trials, right? With technology or academic work where you have to have external points of view, et cetera, I think knowing who you are buying those tools from and being in a trusted relationship and joint accountability, understanding that if something goes wrong, you know you are in it together, because things are going to go wrong, these are all new technologies, so I think that that's where you need to be.

And then invest in development, in developing tools, preparing for that next very ambitious generation coming into the workplace; it's only going to get worse. If we complain that people are much too fast to look for promotions and other types of recognition, this is where it's going to go. So how are you going to create an environment where those ambitious individuals want to stay and work for you? And that's where using the right tools and coming up with a really creative and meaningful development approach is going to be absolutely critical.

[Ian Cook]:

Yeah, so that's fantastic, Anna, thank you. I think there's some really sound advice there. I think that's a great place for us to wrap up, so that people can think about how they build a rapid learning, evolving capability inside their organization. So, thanks everyone for listening to today's episode of the Human Truth Podcast. We've been talking about generative AI at work and all the opportunities it brings. Dr. Anna, thank you so much for all the wisdom, guidance, and expertise you've brought to the conversation. I have learned a great amount from it, and I hope our listeners have too. I'm your host, Ian Cook, and we'll be back next time to discuss another fascinating workforce statistic. Thanks, everybody.

[PRODUCER]

Thanks for joining this episode of the Human Truth Podcast, presented by Visier. More links and information presented on today's show are at visier.com/podcast. Subscribe wherever you listen to podcasts. The Human Truth Podcast is brought to you by Visier, the global leader in people analytics, whose mission is to reveal the human truth that helps businesses and employees win together. Today's episode was produced by Sarah Gonzales with technical production by Gabriel Kava. Ian Cook is our host. See you next time. And until then, visit us at visier.com/podcast.
