
The following is a transcript of our podcast conversation with Sarah Wilkins and Itamar Goldminz. You can listen to the full episode on Spotify or Apple Podcasts.


Sarah Wilkins

Hello, and welcome to Humans Beyond Resources, an HR podcast by Reverb, where we cover topics from culture to compliance. Reverb believes that every decision a leader makes reverberates throughout the organization, from hiring your first employee to training your entire workforce. We believe in building healthy, inclusive cultures that engage your team. I’m your host, Sarah Wilkins. Today, we’re speaking with Itamar Goldminz, VP of Corporate at EarthForce, and also an advisor, consultant, and people professional, to talk about AI in HR. Itamar has worked with many wonderful organizations, enabling the humans in the organization to thrive while dealing with the challenges and opportunities of growth. I’m very excited to talk to Itamar and get his perspective on the use of AI in HR. Welcome.

 

Itamar Goldminz

Thank you. It’s great to be here.

 

Sarah Wilkins

As I usually do, first, I just want to learn more about you. So will you share more about your background and, you know, your particular experience with AI and interest in that area?

 

Itamar Goldminz

Yeah, more than happy to. So I’m Itamar. I often describe myself as a reformed software engineer turned people practitioner. I actually started my career as a software engineer, but quickly realized that I’m drawn more to the people puzzles than the technical puzzles. Initially I pursued this in various people-centric roles within the product and engineering functions: project and program management, engineering management, and agile coaching. Eventually I transitioned to the people team. I led the people teams at AltSchool and Grammarly, and focused more on org design and facilitation, which I view as my superpowers, at Hyperscience and as an independent consultant. More recently, I took on a broader remit at EarthForce, an early-stage company in the wildfire prevention space, leading all corporate functions. I love challenging the status quo, bringing a more science- and evidence-driven approach to solving problems, and using tools and automation to solve repeatable problems. I guess that’s also what got generative AI and large language models on my radar earlier this year: noticing that they finally seem to have crossed a threshold where investing the time in mastering them started providing a meaningful return on investment.

 

Sarah Wilkins

That’s great, thank you so much for sharing. I have a couple of follow-up questions. One: I love the startup you’re working with now, its mission and what they’re trying to do. So thank you, and thanks to that team. And then I’m curious: with your background in software engineering and then turning people professional, how do you think that’s helped you be a better people professional, or understand the people in the organizations you’ve worked with?

 

Itamar Goldminz

Yeah, I think there are two key experiences that really translated over. One, from engineering in general, there’s a lot of heavy reliance on science and research and bringing that into practice, and that has translated quite well into the people practice as well. As I said, I’m not a big fan of the status quo; I always ask why. For every organizational practice that’s done in a certain way, I’m curious to understand the why behind it, and oftentimes I’ve found that there’s a better way to do things, that there’s research suggesting that doing things differently may lead to better results. So this mindset of look for the literature, look for the science, then run an experiment and see how it impacts your organization is something I carried across from my days as a software engineer. And also, early in my career I was part of a company that was building a lot of behavioral science into its product. That, again, taught me how to leverage behavioral science, how to better understand what the science teaches us about people being people, and how we can bring that into the practices and experiences we design in organizations. That’s another thing I carried with me from my prior experience.

 

Sarah Wilkins

Those are such great experiences and a way to challenge the status quo, because I think sometimes processes across all functions can be put in place without really thinking about why we’re doing it this way, whether it’s the best way to do it, or the most human-centric way; we kind of just run through processes sometimes. So I love the questioning and understanding the behavior behind things to get the results you’re looking for. A lot of our listeners, myself included, really want to understand how we can leverage AI more in HR to help speed up common HR tasks or processes. What are some common tasks in the HR space that are really great candidates for AI?

 

Itamar Goldminz

Yeah, that’s a really great question. I would start by making a broader statement first. I think a big part of getting a successful outcome in working with these tools is having the right mindset about how we’re going to work with them, a bit like working with a smart intern. First, I need to give them context in order to produce something valuable; I can’t just assume that they know everything I know, give them very short guidance, and hope that they’re going to be able to read my mind and produce a high-quality outcome. And secondly, I will always be the one who has to apply some sort of finishing touches on what they produce. It’s never a set-it-and-forget-it dynamic where I completely offload something off my plate and expect to get a shippable product; there’s always this last level of polish, refinement, and adjustment that I have to do myself. I think adopting that mindset and keeping those things in mind has really helped me with any task I would use these tools for, let alone in the context of specific HR tasks.

 

Sarah Wilkins

Thank you for calling those things out. I think that’s really important to start with because, yeah, what you put in is what you’re going to get out, and you can’t take what you get and assume it’s ready to go. So those are really great points. Thanks for calling that out before going into some of the details.

 

Itamar Goldminz

Yeah. And I would also say, specifically around HR tasks, many of them run into challenges that I believe AI is particularly good at helping us with. For example, the new AI tools are really good at helping us come up with ideas or insights, whether we’re looking for an activity that we may want to run at the next executive offsite or trying to make sense of the latest engagement survey data. So, making sense of data and coming up with new creative ideas. They’re also really good at helping us unblock ourselves: when we have a rough draft or a half-baked idea that we need some help developing further, that’s another really great use for AI. And once we’ve figured out what it is we want to say or do, they can really help us write anything and provide us with a strong first draft. They can also help us, or anyone else in the organization, get coaching or advice on a difficult situation that we’re trying to navigate.

That’s another very common HR-centric task that we often run into. And then maybe one other thing I would say is that they can help us learn new things, from summarizing articles, academic papers, and corporate policies, to explaining concepts, or even helping us think through common pitfalls in a certain approach we might be taking. So learning and development is another really important area in which we can leverage these tools.

 

Sarah Wilkins

That’s great. I hadn’t come across that last one in what I’ve been consuming, so it’s really interesting to bring it up as kind of a learning and development tool. Do you have any more to share on that, or things you’ve seen work really well for organizations you’ve worked with?

 

Itamar Goldminz

Yeah. In many of the workshops that I’ve either participated in or led, a big part of mastering skills, especially human skills, is this notion of practice and simulation. These exercises are always awkward, and we don’t always have the right role models to learn from. This, for example, can be a really powerful way to leverage these tools. Large language models in particular give us really interesting and really helpful results when we give them a very specific persona to play. I mentioned context in the beginning; describing how we want them to behave by giving them a certain persona really improves the quality of the interaction and aligns it more with what we’re hoping to get out of them. So one thing I’ve done and experimented with is really role-playing with the tool. Say I have a challenging interpersonal interaction that I’m either preparing for or trying to work through, and I basically want some practice before I go into the actual interaction.

I can start by just asking for general advice that’s a bit more theoretical and get a few pointers, some things to focus on. I can identify an area that maybe I don’t know as much about as I would like and ask for some more information, and then just say, hey, can we role-play this? And beyond that, once we’re done role-playing, I can actually ask for feedback on how I did. So the entire learning cycle can basically be supported by these new tools.
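To make that kind of persona-driven role play concrete, here is a minimal sketch using the OpenAI Python client; the model name, persona, and scenario are illustrative assumptions rather than anything discussed in the episode, and the same pattern works with other chat-style tools.

```python
# Minimal sketch of persona-based role-play practice with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Give the model a specific persona and scenario so the practice session
# stays focused, and ask it to switch to giving feedback on cue.
messages = [
    {
        "role": "system",
        "content": (
            "You are role-playing a skeptical senior engineer receiving "
            "difficult feedback about missed deadlines. Stay in character. "
            "When I write 'END SCENE', step out of character and give me "
            "specific feedback on how I handled the conversation."
        ),
    },
    {
        "role": "user",
        "content": "Hi, do you have a few minutes to talk about the last sprint?",
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
reply = response.choices[0].message.content
print(reply)

# Append the reply and keep alternating turns; send 'END SCENE' at the end
# to get feedback on your side of the exchange.
messages.append({"role": "assistant", "content": reply})
```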

 

Sarah Wilkins

That’s such a great example. We do a lot of leadership development as well, and that’s a really interesting way to take the learning from a workshop and practice it, because what you learn in an hour or two of workshop time doesn’t necessarily translate into a skill that’s ready to use right away. So having those prompts and that support, using simulations like that, would be really helpful.

 

Itamar Goldminz

Yeah. And if that sounds like too big of a jump, too big of a leap of faith, I would say there’s also a half step, which is coming up with the right scenario. If there’s a skill set I want to practice, or if I’m designing a workshop and want to come up with the right role-play activity, using the AI even just to come up with a scenario that demonstrates the skill and gives participants an opportunity to hone it can go a long way.

 

Sarah Wilkins

Yeah, absolutely. One thing I hear about a lot is people using it for job descriptions, and I’ve also learned or heard about the bias that can be present in that as well. So can you share a little bit more about that, both the use of it for job descriptions and also the bias that can present itself when you’re using a tool like that?

 

Itamar Goldminz

Yeah, absolutely. That’s one example of using the tool to help us write something we’re trying to write. With job descriptions, I’ve definitely used it that way. My workflow prior to these tools had always been to look for similar job descriptions for inspiration and then try to create some sort of synthesis out of the things I’d come across and liked, whereas now I would probably ask the tool to create a first draft for me and then give it feedback on how I’d like to refine it from there. In terms of bias, you’re touching on something that is still a challenge for many of these tools: they are only as good as the data on which they were trained, and that data is what leads to the outcome. This is another example where the intern mindset or mental model is really critical. Because of these limitations, we can’t just take output out of these tools and use it as is. There’s an extra level of scrutiny we need to apply to make sure it checks some of the not-so-trivial or obvious boxes, such as potential bias in the language or terminology being used. One thing I’m still mastering with these tools, and it’s a big shift from the way we’re used to working with software, is the iterative nature of working with them toward a certain work output. As opposed to, for example, using Google, where we type in a search, get a set of results, and if it’s not exactly what we like, go back to the start and try again, here we can keep giving the tool feedback until it does what we’re hoping for, does what we want it to do, or takes out things that are biased, for example.

 

Sarah Wilkins

Those are great points. So, moving forward, as a people leader, what should you consider about the use of AI, not only in HR but across the organization?

 

Itamar Goldminz

Yeah, so there’s an endless list of considerations, and a lot of compliance-type risks and gotchas, pay attention to this and pay attention to that. We can talk about those separately. But I want to call out, or maybe focus on, two things that may seem contradictory at first but are actually more complementary once you take a minute to think about them. The first is that it’s really critical to start small with these tools. There’s a learning curve to using them; they require a different way of engaging with them in order to get good outcomes, and that takes a bit of time to master. You need to go up that learning curve and have the perseverance and tenacity to get through the first few stages of being a novice and not getting things exactly right. So I’ve found it incredibly helpful to start by looking for small, low-risk activities that can help your team and your organization ramp up on the skill curve in a way that feels safe to them, and get them to a good comfort level in using the tools before attempting something a bit more high stakes. Even among the examples we were just playing with, an internal communication writing exercise may feel, and actually be, a lot lower stakes than drafting something that we know is meant for public consumption. So look for those small, safe-to-try experiments that help you ramp up the learning curve in an environment that feels safe and low risk.

But then, secondly, with that being said, it’s really important to go on this journey with a certain end in mind. The use of any technology just for the sake of using the technology almost never catches on. We’ve seen it with some of the previous technological revolutions, whether it’s the internet or mobile: if you just use the medium for the sake of using the medium, without really leveraging its benefits or trying to do something that is meaningful to the business with it, it just doesn’t catch on. So once you’ve created that basic comfort and competency level, it’s important to identify a big enough organizational pain point that you believe AI may help address and start pursuing that path. I think you only have a limited time before the novelty of using a new tool wears off, and you need to generate notable value for the business in order for the approach and the tools to stick, and not just go by the wayside after the cool generative AI workshop you just did.

 

Sarah Wilkins

Yeah, absolutely. I think that’s such a good point, not just with AI but with anything you implement in the people space, whether it’s something for performance or engagement or things like that. It’s so important to teach people how to use it and get them to truly adopt it, and to make it more effective than doing it the old way or a different way. So it’s a great point. And like you said, there are so many different compliance things and policies to consider. We don’t have to talk about that here unless you have a couple of highlights, and then I can share Reverb’s AI usage policy that we put together. We did a blog post and made it public, so I can make sure to include that in the show notes as an example. But yeah, anything you want to say on that front?

 

Itamar Goldminz

Yeah, I would just highlight that obviously there are general limitations and compliance issues to be aware of when using these tools more broadly. In the context of HR, employee data, privacy, and security are the things that really bubble to the top, and it’s important to do your due diligence on the tools you’re choosing to use, their privacy policies, and their level of security. But I would say, don’t let that risk deter you from trying at all. Find those safe-to-use, safe-to-try, low-stakes experiments, even from those perspectives, so you can start to get a feel for what these tools can do for you while you’re looking for safer solutions for higher-stakes, more sensitive data sets.

 

Sarah Wilkins

Yeah, absolutely. And it’s really easy to find the privacy policies and data usage terms of all of these tools. As a user of several of them myself, that’s one of the first things I did: take a read through and try to understand how they use the data that goes in.

 

Itamar Goldminz

Yeah, the rule of thumb I often suggest people start with is basically to think about using the tools the way you would ask a good friend for some sort of professional advice. If I were in that situation, obviously I would not disclose any company-confidential information, and I would also work to anonymize things and protect people’s privacy, so I might change their names or just refer to them by their roles. Those are incredibly useful rules of thumb for your first steps into engaging with these tools, before doing more thorough due diligence if you do need or want to disclose more information beyond that.
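As a rough illustration of that rule of thumb, here is a small sketch of scrubbing obvious identifiers from a prompt before sharing it with any external tool; the names, roles, and company here are made up for the example, and a real process would need a more careful review than simple string replacement.

```python
# Illustrative sketch of the "ask a good friend" rule of thumb: replace
# names and the company with generic roles before sharing a situation
# with an external AI tool. All names and roles below are made up.
REPLACEMENTS = {
    "Priya Sharma": "the engineering manager",
    "Daniel Ortiz": "the direct report",
    "Acme Robotics": "the company",
}

def anonymize(text: str) -> str:
    """Swap known names and the company name for generic role labels."""
    for name, role in REPLACEMENTS.items():
        text = text.replace(name, role)
    return text

draft = (
    "Priya Sharma needs advice on giving Daniel Ortiz feedback about "
    "missed deadlines at Acme Robotics."
)
print(anonymize(draft))
# -> "the engineering manager needs advice on giving the direct report
#     feedback about missed deadlines at the company."
```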

 

Sarah Wilkins

Yeah, absolutely. Great points. We talked about the risks a little bit with the compliance aspects. Any other risks you see in using AI in HR in particular?

 

Itamar Goldminz

There are things that I wouldn’t necessarily put in the category of pure compliance, but that we definitely need to be mindful of, such as the ethical use of these tools. Just like any tool, they can be weaponized in various ways, so defining not just what are appropriate inputs into the tool but also what are appropriate outputs is, I think, really important. We talked about the bias aspects and the need to test for that. And then I would say there are still challenges with these tools when we’re trying to do more facts-based work. This is a phenomenon often referred to as hallucinations; I won’t go into the details of why that is and what causes it here, people can go and look it up. But going back to my intern mental model, one other lens through which it’s important to check these tools is fact-checking, making sure that the information being used is factually correct. That’s also why I often recommend starting with tasks where there is no right answer, or NORA for short, an acronym I think the crew at Google came up with, because that helps you avoid the hallucination problem. When I’m trying to craft a creative message, there is no right way to do so, and so I’m less likely to run into hallucination-related problems when I’m working on those kinds of tasks.

 

Sarah Wilkins

I like that acronym: no right answer. Yeah, this has been really insightful and helpful. Anything else you want to share, any closing thoughts, with the people listening?

 

Itamar Goldminz

Sure. It’s pretty clear at this point that AI will have a transformative impact on businesses, at least of the same magnitude as other foundational technological innovations like cloud computing, the internet, and mobile phones. As HR leaders, we have a dual role in navigating this reality: helping the organization as a whole develop the necessary competency to understand the impact it will have on the business, and leading by example by making sure that our own function is walking the talk in adjusting to this new set of tools. It’s one thing to talk and listen and read about AI, and another thing to experience what it’s capable of. So my closing call to action, for anyone who’s still standing on the sidelines, is to find a task that’s small enough and safe enough to try, and just give it a try.

 

Sarah Wilkins

That’s great, and a great way to end. Thank you so much for your time; I really appreciate it. I know this will be a popular episode, it’s such a great topic to be talking about. So thank you.

 

Itamar Goldminz

Amazing. It’s been a pleasure talking to you.

 

Sarah Wilkins

Thank you for listening to this episode of Humans Beyond Resources. Visit ReverbPeople.com to find free resources, subscribe to our newsletter, and connect with our team. If you haven’t already, subscribe to stay up to date on all of our upcoming episodes. We look forward to having you as part of our community.

 
