One of our members (a dental practice) used AI to summarize their notes and recently asked how people are addressing AI in the workplace. We’ve discussed AI and hiring biases on the podcast (a very long time ago!). As expected, more uses are coming to light. Because of that, we’re back with updated guidance and information on using AI in the workplace, with a focus on HIPAA compliance and the ways employees could unintentionally break HIPAA rules. There are several ways AI can revolutionize your work life and that of your employees, making tasks easier and more efficient. Learn more about the exciting possibilities in this week’s episode of What the Hell Just Happened?!
Transcript
Voice Over: You’re about to listen to an episode of What the Hell Just Happened. Join Paul Edwards and his guests as they discuss interesting HR topics and solve some of our listeners’ submitted questions.
Paul: And occasionally I’ll go off HR topic and talk about whatever I want to talk about. Think barbecue. Space exploration. Technology. Money. Managing. Business. Things that interest all of us.
Voice Over: We get a lot of e-mails with questions. Stay tuned for details on how you can submit yours to the show. And now let’s get started.
Paul: All right. This week’s podcast on What the Hell Just Happened is going to be about AI, everybody. We’re going to just scratch the surface. I’m going to bring Angelo in from CEDR. He’s on our compliance team. He’s an analyst and helps us kind of figure out existing laws and things that are coming out that we need to pay attention to. I’m kind of excited to talk about this topic because I think it’s going to have one of the biggest impacts on medical practices, dental practices. For everybody associated with medical, I think AI is going to have a huge impact.
And I think that one of the reasons why it’s going to be such a big impact is because it’s going to greatly improve everybody’s experience. From the providers who are having to take notes and get things done, to the clinicians, to the patients that are walking through the door. I just think AI is a revolution, not just in treatment, but in customer care and patient care and service.
So with no further ado, we’re going to get into a little bit of the scary stuff. We’re going to focus a little bit around HIPAA and how you could inadvertently already be committing breaches by letting an AI-powered tool help you write communications to your patients.
Paul: All right. So we’re just going to jump into today’s podcast about AI and its use in medical.
I’m joined today by Angelo. Angelo, would you please introduce yourself and tell people why the heck you’re here? You have a…there’s a reason why I brought you in here. Angelo is from CEDR. He works at CEDR. What’s your job? What’s your job?
Angelo: I’m a compliance analyst and kind of a problem solver.
Paul: You’re a nerd. And my software engineer gets mad when I call him a nerd.
I don’t remember what he said he is. He’s like, ‘I’m not a nerd. I’m something else.’ I don’t remember what it was. But anyway, Angelo, you’re in compliance. You’re an analyst. You help us research laws and rules. Anytime something new comes up, you’re on that team that starts researching the new could-be law. Could be some proposed regulation.
Could be ‘Hey, there’s something new on the horizon.’ I want to take a minute to say the last time… Well, I’ll give an example. So when I first started this company, social media did not really exist. And then it started existing in 2009, ‘10, ‘11, ‘12. Well, we had MySpace, but it wasn’t a problem. And then Facebook came on, and shortly thereafter, a couple of years later, it started to become a problem.
Or maybe that’s not the best way to put it. It started impacting the workplace, and we needed to have policies. There were certain policies we could have, and other policies we weren’t allowed to have based on existing law that was already in place. Ironically, those laws weren’t about social media. They were about employees’ ability to communicate with one another and what you can and can’t do around that kind of stuff.
So we got an email from one of our members, and he asked, ‘What are you guys doing about AI?’ And when we asked for some more details, Angelo, he was like, ‘Well, I’m using one of those transcribing programs that records what I’m saying to the patient, and it’s doing an amazing job and I want to use it. So I want to know, you know, do you guys have any policies on AI and what it means?’ Which brought up some kind of neat questions, like: they’re recording in the room with the patient. Is the patient aware that they’re recording? When that information’s pulled into that software, or it’s sent off to an AI system, is that AI system HIPAA compliant? Yeah, and that’s kind of a lot of what we’re going to talk about today.
Angelo: Yeah, big, big questions.
Paul: Yeah, you know, Angelo and I talked about this. I was listening to the podcast Hidden Brain, which, by the way, I kind of love. So if I could recommend another podcast for people to go listen to, that would be one of them. And he had on an amazing AI expert, and they were talking about how it was already being used in medicine. I think we all know the example… are you familiar with the story about x-rays?
Angelo: I’m familiar that that’s a big area where it’s being utilized, right?
Paul: Yeah. Yeah. You know, the x-rays… well, not just x-rays, but all kinds of scans. The AI’s able to spot things that sometimes doctors aren’t able to see, or the tech. What are the x-ray people, what are they called?
Angelo: X-ray technicians.
Paul: Nah, not the technician.
Angelo: The radiologist.
Paul: Yes, the technicians take the scans, and the radiologists look at the results. So anyway, you know, on Hidden Brain they were really expanding on this, saying that they were excited, because doctors are spending a lot of time taking notes and not really being able to pay attention to patients. And what they input is very important because it determines whether or not it’s going to get covered by insurance. So doctors have really become scribes in a lot of ways. And the AI systems being put in place are starting to do things like completely transcribing the visit. They envision that by the time a patient walks out of the treatment room, their labs could be ordered, and the system may suggest other labs that may not have been suggested, based off of all of the visits that patient’s been coming in for. I have that problem every now and then. I don’t mind discussing it: I had a really big cancer scare not long ago, a few years ago now, thankfully. And there are certain tests I need to take, and I see different doctors, and it’s awful to have to go to four different doctors in order to figure out all the tests.
So this system can completely eliminate that. It can just say, ‘Hey, you need this test for this visit, but you’re behind on these other tests and you need to schedule your annual scan. You’re four months behind the lab’, you know, can just bring everything together. If it’s an older patient, an elderly patient, maybe they’re having trouble with memory and stuff like that.
There could be parts in the system which actually communicate with authorized family members or caregivers, saying, ‘This is what we found here, this is what we agreed to, this is what we have to do.’ And then I think the last thing I really liked was it was going to have a strong ability to follow up on even some of the sort of minor conversations that you have. Like the doctor says, ‘You know, what would really help your back is if you lost a little bit of weight. But more than that, I’m going to recommend that you go see PT for five visits and learn how to do these stretches.’ So when they come back, the system’s already telling the doctor, ‘Hey, check to see if they did the PT stuff. What happened with that? How are they feeling about it?’ So it’s actually helping them with their follow-up, which I think is pretty neat. These things are all pretty neat. But at the crux of this, though, is that this information is being grabbed and sent to servers, right? Yeah. So we wrote a policy about this, because, you know, that doctor reminded us that this is something that’s upcoming, so we’ve already put a policy in place. What do we cover in that?
Angelo: We cover general AI use. I think Copilot, Bing, any sort of Microsoft product an office may be using, any Office 365 product, may have some AI capability built into it. And so it’s important, I think, for our members to investigate a little bit. You know, how is the data that we’re putting into this being collected? How is it being used?
Paul: Where is it going?
Angelo: Where’s it going?
Paul: Wait, so what you’re saying is, I haven’t thought about this, Angelo, is that I could be sitting with Grammarly running on top of my Microsoft or my Google Docs and I could be typing patient information in and it could be grabbing that information and sending it out.
Angelo: Yeah, it could be captured on some database somewhere or in some servers.
Paul: Well, I think it’s very likely that it is. Yeah.
Angelo: Yeah. I think with the free versions of things like Copilot and ChatGPT, I’m almost certain that any information you put into there is going to be collected, and it could be used in some way to continually train these models. It’s when you get into an agreement with an AI company or Microsoft where you’re disclosing to them that you’re a covered entity under HIPAA, that’s when it’s incumbent upon them to make sure that the software you’re utilizing is HIPAA compliant. So that’s when you start getting into business associate agreements. And if you’re using AI in your practice and you don’t have a business associate agreement in place with whichever software vendor you’re using, that’s a big red flag.
Paul: That is a big red flag. And just for everybody’s benefit, there are many companies out there that began to recognize this as far back as 20 years ago. Most companies built it in: if they’re aware of it and they’re going to cover you that way, they’re going to place some kind of business associate agreement into their agreement with you. If you don’t know what we’re talking about with this business associate thing, it’s in our HIPAA training. We provide that for, you know, all of the members out there.
Angelo: In our policy, though, to go back to the policy that we drafted, it’s really just kind of letting employees know that, ‘Hey, if you’re using AI in your workflow, we need to know about it.’
Paul: Be mindful.
Angelo: Yeah, be mindful of what you’re doing, run it by management. Let’s all get on the same page as to what’s happening in the tasks that you’re using this AI for.
Paul: I mean, that makes sense right now. Those are the policies that we’re going to have in place, and you’re going to start seeing it pop up all over the place in the software and stuff that you’re using. And I think one of my first concerns is that it’s going to pop up and start being used tangentially. You gave a great example with Microsoft products, sending your text out to maybe write it better for you. You have to understand where it’s going, because not everybody who provides AI tools understands, or they’ll just disclaim.
They’ll just say, ‘This is for everybody, and we are not protecting this information the way that we’re required to under the HIPAA rules.’
Angelo: Exactly. Yeah. And just because they say that doesn’t mean you don’t have that responsibility any longer.
Paul: So I want to give just a couple of very related examples of how someone could easily screw up. Take two employees that are tasked with writing some stuff, cleaning some stuff up, some communications that are supposed to go out either from, you know, a medical practice or from a hospital. One of these instances is a hospital. They’ve got this person who’s been brought on, and their task is actually to clean up their communications with their patients: input the right thing into the templates that go out, saying your tests are done, or this prescription needs to be filled, or your visit entailed this, or your upcoming visit entails this. Be here at 7:00, we’re going to put you under at 8:00, don’t eat, don’t drink any fluids. And the issue could be that while writing those templates, they’re taking what they’re working on, or what they have already, and they put it into the AI and say, ‘This is what I want to do with this. Can you improve it? Can you add to it?’ And the next thing you know, they’re copy-pasting protected information into the AI, thinking that they’re working on their computer, when what they’re doing is working off of a server somewhere that may not be in compliance. And you’re going to see this in your practices over and over and over again. Employees who are charged with writing and communicating are going to start using AI tools because, frankly, they’re better. They get you to the point. They can be very, very helpful. But again, this is where policy comes into place, where you’re informing employees about that. And Angelo, I want to make this point before we get out of here:
When it comes to HIPAA compliance, the fact that you tried to inform people, and that you can show that you told them not to use these tools, or to use them only in a certain way or with approval, is what mitigates the penalties if you get caught, or if it causes you some kind of, you know, unforeseen problem. The unforeseen problem is that one of these servers gets broken into, somebody knows how to extract the information and boil it down, and the next thing you know, you start seeing your email templates online. I mean, that’s exactly what the risk is there. So, you know, the fact that you took steps greatly mitigates any trouble that you can get into. But make no mistake, just because you told people not to do it and they did, you know, you can still get in a lot of trouble as a practice owner.
Angelo: Yep. Yeah. So HHS deals with a lot of these sorts of data breaches, and any covered entity is supposed to report to the HHS Office for Civil Rights when a breach has occurred. And I was looking at the numbers the other day. They have this Excel spreadsheet that they offer the public, which is a rolling 24 months: all of the breaches that have been reported in the past 24 months. 160 of the roughly 900 that were reported in the past 24 months are from business associates. So it’s interesting. Once you’re working with these companies and you have this business associate agreement in place and you’re utilizing their software, you’re both on the hook, right? And so if you’re using software and you don’t have a business associate agreement in place, and the company is refusing to enter into one with you, then that’s
Paul: You need to stop using it.
Angelo: You need to either stop using it, or look for something else, or really try to get a meeting and get an answer as to why they won’t enter into a business associate agreement with you.
Paul: Right, right, right. That makes sense. Okay, Angelo, I mean, we could talk and talk and talk circles around this, but I just really wanted to get on here, pull somebody in from CEDR, and talk about what the hell just happened. What the hell just happened is: AI is in the workplace. And I think what I just learned is it’s running in ways that we don’t even understand. And frankly, thinking about what you just said about breaches: anybody who uses Grammarly… well, let’s not single out one product. If someone’s using some program somewhere, an AI program which is grabbing things on the page and correcting it for you, or helping you write it, or doing whatever it is that we described, and they are not compliant and don’t have a business associate agreement with you, and you put something on that page that includes your patient’s name, their birth date, and any other protected health information, you’ve just committed a breach.
Angelo: Potentially
Paul: You’ve potentially committed a breach and potentially you were supposed to report that. So what I am going to say is that I think we may have thousands of breaches out there right now that we don’t realize that are going on.
Angelo: Yeah, it’s possible.
Paul: Well, I’ll leave everybody with that fearful note. Good luck, everybody, goodnight. [laughs] Sleep well.
Angelo: In my own practice of using AI in the workplace, because I do use it, I do what I can to separate my work product from the use. I really try to target the use of it and provide only general information that’s going to help me achieve my objective. And then, once I get the output from the AI, I can, you know, move it to a template and do things like that. So there are some steps you can take to try and separate your patients’ information, the process of using your patients’ information, from the task at hand, from your administrative stuff.
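[Editor’s note: As an illustration of the separation Angelo describes, the idea can be sketched as a small redaction step that runs before any text leaves your systems. This is a hypothetical sketch, not a real de-identification tool: the `redact` helper, the patterns, and the placeholder names are all assumptions for illustration, and actual HIPAA Safe Harbor de-identification covers 18 identifier categories, far more than shown here.]

```python
import re

# Illustrative-only patterns: a few common identifier shapes.
# Real de-identification requires much broader coverage.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DOB]"),          # dates like 04/12/1978
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-shaped numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def redact(text: str, patient_names: list[str]) -> str:
    """Replace known patient names and common identifier shapes with
    placeholders, so only generic text is sent to an outside AI tool."""
    for name in patient_names:
        text = re.sub(re.escape(name), "[PATIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Jane Doe (DOB 04/12/1978, jane@example.com) is due for her annual scan."
print(redact(draft, ["Jane Doe"]))
# -> [PATIENT] (DOB [DOB], [EMAIL]) is due for her annual scan.
```

The AI then only ever sees the placeholder version; the real names and dates get merged back into the template inside your own systems, as Angelo describes. A filter like this reduces exposure but is not a substitute for a business associate agreement.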
Paul: Yeah, yeah, yeah, that makes total sense. Okay, everybody, what the hell just happened in HR is: AI is on the scene. Angelo and I had a little bit of information, we have put some policies together, and we’re just watching this as it comes along.
Just kind of pay attention. Hopefully, this podcast kind of gave you some places where you need to look at where you’re using AI and how you’re using it. Thanks, Angelo. I appreciate it.
Angelo: Thanks for having me.
Voice Over: Thanks for joining us for this week’s episode of What the Hell Just Happened. If you have an HR issue, question, or just want to add a comment about something Paul said, record it on your phone and send it to podcast@WTHjusthappened.com. We might even ask if we can play it on the show.
Don’t forget to like and subscribe and join us again next week.