#254 - AI, Privacy, & Security: Insights from Aimee Cardwell
===
[00:00:00]
G Mark Hardy: Hey, we're all security experts, right? But have you ever had to deal with the problem of privacy? We're finding out that's becoming more and more of a concern in our day and age, and I've got an expert who's gonna share with you her insights on how you can address that.
G Mark Hardy: Hello, and welcome to another episode of CISO Tradecraft, the podcast that provides you with the information, knowledge, and wisdom to be a more effective cybersecurity leader.
My name is G Mark Hardy. I am your host for today, and I have with me Aimee Cardwell. Welcome to the show.
Aimee Cardwell: Thank you. It's such a pleasure to be here. Thanks for having me.
G Mark Hardy: It's a privilege to have you here. I was going through your background, and it's very impressive: UnitedHealth Group, Amex, eBay, Expedia, Netscape, Impact Awards, Executive Women, and gee whiz.
It's like I am not worthy to have you on the [00:01:00] show. But awesome.
Aimee Cardwell: You're funny.
G Mark Hardy: It's funny, but it's true. So you've done an awful lot. In your own words, tell us a little about how you got to where you are right now.
Aimee Cardwell: Yeah, it's interesting. I'm probably the least likely executive you've ever met. I don't have a four-year college degree. I started working right outta high school, and didn't really even finish high school, for those of you listening who didn't do that either.
But I am very curious and I love to learn, and from an early age I've been growing and changing and learning. That's pretty much what I think all cybersecurity experts have in common.
It's a field that's never the same. So even if you graduated with a four-year degree in cybersecurity, eight years from then you're gonna have to have all new information, because the playing field will have completely changed.
G Mark Hardy: Absolutely. Another proof of G Mark's Law; I wrote that over 25 years ago: half of what you know about security will be obsolete in 18 months. [00:02:00] For all of us who are limping along on Windows 10, say goodbye next month, or get ready to write big fat checks to Microsoft.
But yeah, don't feel bad. It's interesting, some of us have this little concern about our background when we look over our shoulders. I had Danny Jenkins on the show just a little while ago, and he dropped outta school at 15. He's now founder and CEO of a billion-dollar company.
He has his own unicorn. Sometimes we conflate formal education with career success, and to a certain extent, formal education is helpful insofar as it gets you conversant in different languages. For example, I do still recommend that people consider something like business school, particularly if you came up with a technical background, because it's gonna expose you to the language of business. But even then, from a cybersecurity perspective, we have a nuanced perspective of business, and that's on risk and being able to manage risk across our enterprise. Typically we've tended to think about that as being risk across [00:03:00] cybersecurity. But as we want to talk about today, privacy has come up alongside.
And now these two potentially disjoint parts of the business, we have to start thinking of them more holistically if we're gonna be effective in our leadership roles. Would you agree?
Aimee Cardwell: I couldn't agree more. Usually the privacy team rolls up to, let's say, the chief legal officer, and the security team rolls up maybe to the CIO, or sometimes the COO. So you're looking at organizations in very different parts of the enterprise, and yet the closer they work together, the better the outcomes are for both teams.
G Mark Hardy: And also you have waste and duplication of effort if I'm trying to run two parallel teams. Now, what else do you find other than waste?
What type of errors might you see when we start to treat these as two separate silos of excellence, so to speak?
Aimee Cardwell: Yeah, so the privacy team's really trying to think about, from a regulatory and [00:04:00] governance perspective, what they're doing with people's data. But from a security team, we're also looking at things like: are we deleting data as quickly as we should be? So, you know, all of the tools that we're trying to use to make sure that we're doing data retention policies and deletion.
The privacy team cares a lot about that too. And generally they have less money than cybersecurity does, right? Cyber tends to get a bigger budget, so if you work together, you can actually buy better tools and do the scans once, and make sure that you, as a cyber professional, are reducing your attack surface by having as little data as possible. And from the privacy perspective, they're gonna be as happy about that as you are.
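To make that shared-tooling point concrete: a minimal sketch of the kind of retention sweep both teams could run off one scan. The data classes, retention windows, and record shape are illustrative assumptions, not anything specified in the episode.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule: days to keep each data class.
RETENTION_DAYS = {"invoice": 3 * 365, "pii": 365, "log": 90}

def expired_records(records, today=None):
    """Yield records held past their class's retention window.

    Each record is a dict like:
      {"id": "inv-2008-17", "data_class": "invoice", "created": datetime(...)}
    """
    today = today or datetime.now(timezone.utc)
    for rec in records:
        limit = timedelta(days=RETENTION_DAYS.get(rec["data_class"], 365))
        if today - rec["created"] > limit:
            yield rec

# One scan serves both teams: security shrinks the attack surface,
# privacy gets evidence that the retention schedule is enforced.
records = [
    {"id": "inv-2008-17", "data_class": "invoice",
     "created": datetime(2010, 1, 5, tzinfo=timezone.utc)},
]
for rec in expired_records(records):
    print(f"flag for deletion: {rec['id']} ({rec['data_class']})")
```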
G Mark Hardy: That's a good point.
Yeah, in cybersecurity we get more exciting problems and breaches than you get over in privacy, and so as a result people treat privacy like the redheaded stepchild of the organization.
Aimee Cardwell: We also have a bigger threat, like you said. If I say, [00:05:00] we've got 15 terabytes of data that could be released, and I can get that down to one terabyte, that's a big risk and a big threat. But if the privacy team says, we're gonna get hit with a regulatory fine that's either gonna be a million dollars or $1.1 million, the business is gonna be like, yeah, not that interesting.
G Mark Hardy: It's interesting, though. Let me ask this question about risk and where our potential threat actors are. There's a whole school of debate out there as to, for example, when an organization has a data breach, particularly if it involves compromise of privacy or PII or things like that. The regulators are very fast to break out their little green eyeshades and tally it up and say, ah, you owe this huge fine. I'm thinking, wait a minute. It's like a guy who reports to the police: hey, I just got mugged. How much did they steal from you?
They stole all my money, 200 bucks. Well, we're gonna fine you $50 for getting mugged. What? I'm the victim here. Oh no, you should have known better. You should have known that there were gonna be bad people out [00:06:00] on the street. Well, there's always bad people somewhere on the street, but I know exactly where they are.
It's not our problem; we need money for the policeman's ball. Pay up. So that's a humorous way of looking at it. But from a privacy perspective, some of us look at that and go: do we have to add regulatory entities to our threat actor list as well, or am I just looking at that incorrectly?
Aimee Cardwell: I don't think you're looking at it incorrectly, but I do think there needs to be a penalty for companies who don't take preventive action to protect people's data.
G Mark Hardy: Oh, sure.
Aimee Cardwell: If there weren't, then maybe we would all just leave our data unencrypted and we would never delete anything. I think they're trying to punish companies who maybe didn't take all the steps that they could have to avoid getting robbed.
If you put your wallet out in the unzipped back pocket of your backpack and it gets lifted, I have a little bit less sympathy for you.
G Mark Hardy: Yeah. Speaking of that, there's the toss wallet. If you grew up in a rough neighborhood, you have a second wallet, which has some [00:07:00] expired cards and a couple of ones. If someone tries to mug you:
take the money, throw it, and they're like, whoa, money. And by the time they figure it out, you're already three blocks away. I was reading a little bit about what they look for when they're gonna mug somebody. Not that we wanna get into mugging on this one, but if you're gonna go out in some rough neighborhood, wear running shoes; make it look like you can chase 'em.
Aimee Cardwell: Oh, interesting.
Aimee Cardwell: It's a little like a honeypot, isn't it? The fake wallet.
G Mark Hardy: Exactly. And I always thought, why don't we do that with PII? Like, we stole all your data! Oh dear. Yeah, it's all fake data, because we just set it up there and that was the easiest thing to grab.
And then we decoy folks. But we talked a little bit before about security, privacy, and the fact that we may be duplicating our efforts; that cybersecurity today might be getting more budget because of the awareness factor. Privacy does become an important element of running our business responsibly, as well as something enforced on us by the [00:08:00] regulatory agencies.
But there's actually a new player in town, and that's AI. Artificial intelligence can work really well to help us on the defense side, but it's also democratizing access to more sophisticated tools and methodologies for those who really don't understand how they work or how to write them.
But they can push buttons and they can talk. So what are we seeing then, potentially, in terms of problems with regard to privacy and the advances of AI, because of that disconnect between IT security and privacy?
Aimee Cardwell: Yeah, it's a great question, and I think about this question from the business perspective. As a CISO, you might say, we're gonna shut down the use of AI across the business. But I think from a business perspective, that's probably not a good idea. You don't wanna be a late starter to the AI game.
So I'm trying to think about how can I enable the business and enable the experimentation [00:09:00] with AI that I know is the right thing for us to do, while also protecting information. And there are a couple ways I think about this. The very first thing is data discovery. As with anything CISO related, if you don't know where the data is, you can't protect it, and it's just as likely that it's gonna end up in some AI somewhere that you're not aware of.
So that's the first thing I think about.
G Mark Hardy: Interesting. So, knowing where our data is. When we classify information, we look at its business criticality, and I think it's pretty well understood now that things like PII, or anything that could be treated like that, also payment card data, protected health information, we have whole different categories of things that need to be protected.
Do we need to rethink governance at this point based upon the AI threat, or can we simply try harder at what we're already doing and expect to have better results?
Aimee Cardwell: Oh my goodness, you couldn't have set me up better for the second half of the AI bit. So I think of it in three [00:10:00] ways, three categories, just like you said. For public information, and I list these in most of the AI policies that I've written lately, I say: if you are writing a job description, changing your marketing material, doing anything that's gonna end up public, or that doesn't have anything company confidential, or of course PII, in it,
use any AI you want. Go nuts, use whatever tool, please go experiment. That's the place for you to play. Then, if you have company confidential information, let's say source code, I would want you to only use AI platforms with which we already have an agreement. So if you're using Cursor, and you have a relationship with ChatGPT or Claude, or pick whatever tool you want,
great, knock yourself out. But don't use any tools for company confidential information that you don't have an enterprise contract with. The most interesting one for me is the sensitive information category, because what I wanna say is [00:11:00] don't ever put sensitive information into an AI. And the reason I wanna say this is because I have found almost no use cases where the sensitive information can't just be de-identified first and then used to train the model.
So if you need to train a model with sensitive information, de-identify it and then train the model to your heart's content. Otherwise, we have to set up an internal model and make sure that there's no possibility of that information leaking, and that's just a lot harder than de-identifying, in my opinion.
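Those three tiers translate almost directly into a pre-flight check. A minimal sketch, with illustrative tier names and an invented approved-vendor list; an actual policy engine would classify the data itself rather than trust a caller-supplied label.

```python
# Pre-flight check for a prompt leaving the building, following the
# three tiers described above. Tier names and the vendor list are
# illustrative assumptions, not a real policy.
APPROVED_ENTERPRISE_TOOLS = {"cursor", "claude", "chatgpt"}

def ai_use_allowed(tier: str, tool: str) -> tuple[bool, str]:
    if tier == "public":
        return True, "any tool: go experiment"
    if tier == "company_confidential":
        if tool in APPROVED_ENTERPRISE_TOOLS:
            return True, "covered by an enterprise agreement"
        return False, "no enterprise agreement with this tool"
    if tier == "sensitive":
        return False, "de-identify first, then resubmit as confidential"
    return False, "unclassified: classify before using AI"

print(ai_use_allowed("sensitive", "claude"))
# (False, 'de-identify first, then resubmit as confidential')
```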
G Mark Hardy: Yeah, and that was my thought a couple of years ago. Again, I've got too many things going on; I need a whole bunch of people asking, what do you wanna do next? You go, like, hello sharks: what we have here is a little proxy. You go ahead and you feed all your stuff in there.
It automatically identifies sensitive information, puts in the substitutes, and does all your external stuff, and what comes back: instead of Joe, it's Tom, and not Nancy, it's Sally, and this number and that number. And then by having that custom interface there, you're gonna be able to [00:12:00] map it back. So if somebody's not doing that already,
there you go. There's a business idea.
Aimee Cardwell: Yeah, I don't know if there's an auto de-identification tool, but I don't think it's that hard, generally, to de-identify a bunch of data. I'm pretty sure anybody with a really solid Perl script can take care of it.
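A toy version of that script, in Python rather than Perl, doubling as the swap-and-restore proxy G Mark pitched a moment ago. The regex patterns and token format are illustrative; real PII detection is much harder than two regexes.

```python
import re

# Swap identifiers for placeholder tokens on the way out; reverse the
# mapping on the way back. Patterns are illustrative, not production
# PII detection.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def pseudonymize(text):
    mapping, counts = {}, {}
    def replacer(kind):
        def _sub(match):
            counts[kind] = counts.get(kind, 0) + 1
            token = f"<{kind}_{counts[kind]}>"
            mapping[token] = match.group(0)
            return token
        return _sub
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(replacer(kind), text)
    return text, mapping

def reidentify(text, mapping):
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

outbound, mapping = pseudonymize("Reach Joe at joe@example.com, SSN 123-45-6789")
print(outbound)  # Reach Joe at <EMAIL_1>, SSN <SSN_1>
# ...send `outbound` to the external model, then map the reply back:
print(reidentify(outbound, mapping))
```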
G Mark Hardy: Yeah, we can vibe code that Perl script too, so we're off and running.
Everybody's Perl scripts are just so 2000s; today everything's gonna be done in Python, or Rust, or something like that. Anyway. So we talk about things, and of course, going forward, we look at AI as being a challenge. But one of the difficulties is, even if we started re-architecting everything today, we've got
a huge amount of legacy data. It starts out at a hundred percent by definition, then it fades over time in two ways: either we generate lots of new information, at which point the legacy share gets smaller, or, as you alluded to a bit earlier, we delete old information that no longer has any good business use but could pose a [00:13:00] liability.
And that's a discussion I have frustration with when I work as a CISO, trying to convince executives who think, we might need it someday. I say, it might bite you someday. Think about information like stuff in the back of the refrigerator: you don't wanna see what it looks like a year and a half from now.
You probably should get rid of it if you're not gonna be utilizing it.
Aimee Cardwell: We should partner up and go on a national campaign about that, because honestly, one of the most interesting cybersecurity leaks that I cleaned up after was patient data stuck in the invoice folder of a company. And you're gonna say, why was there patient data in the invoice folder?
You wouldn't think of it, but of course they bill the healthcare companies, and when doing the bill, they needed to say, these are all the services that we provided to all of these patients.
And that was in the invoices. So they had 15 years of data in the invoices folder. I would challenge you that you probably don't [00:14:00] need more than three years' worth of back invoices.
G Mark Hardy: Yeah, it's really hard to collect on something much older than that. Hey, you forgot to pay this back in high school. But there are people who want to keep it, and then PCI DSS or other regulations sometimes have seven-year requirements.
It really comes back to the regulators. So you've got different forces here. Again, we're a little off on a tangent, but let's talk about it. So we look at data retention. We have a couple of forces going on. One, of course, is regulatory. So if we're handling payment card data, PCI DSS, I'm gonna have a requirement there.
If I have a federal regulation or some other state regulation that says I have to do this, I'm gonna comply with that. But remember, compliance is not necessarily an achievement of excellence. It basically keeps the lawyers happy and it keeps regulators off your back. That said, beyond that point, the real business criterion is how much value can I extract from information that I am retaining past its [00:15:00] mandatory retention date?
And if the person says, I want to keep it, we can calmly push back as a CISO and say, hey boss, please tell me three ways that we have made a profit with information that is older than four years, for example, so I know how valuable this is. You're probably gonna get either a hostile pushback, or a light bulb will come on: I guess it's never happened.
And I say, statistically, it is never gonna happen. But what's the possibility of getting served with a discovery notice saying, hey, an intern that worked with you five years ago is now in some sort of legal situation, and we want every email this person ever wrote? And you still got 'em, and you don't have 'em indexed correctly.
So you're gonna spend hundreds of hours digging things out on a case where you don't have a dog in the fight and you don't have any interest in it, and yet you're liable. As compared to being able to go back and say, hey, you know what? Here's our document retention policy. It's been operating for years.
We deleted all that three [00:16:00] and a half years ago, per a policy that has been enforced for five years, your Honor. We don't have it. And it's not destruction of evidence; it's just that we've been following best practices. At which point the other side says, yeah, sorry. Go fish.
Aimee Cardwell: G Mark, I learned that lesson probably 35 years ago when I was at Netscape, because as you may remember, Netscape was sued by Microsoft, and it didn't have a document retention policy, or at least not one that I knew of at the time. I was a pretty low-level employee back then.
But literally every email that we ever wrote that said nasty things about Microsoft, all of that was admissible, a hundred percent of it.
All of the little chats that we had in our supposedly private chat rooms about what Microsoft was or wasn't doing, and all of our internal feelings about Microsoft, all of that was admissible. It's just not a good idea to keep that stuff around. To that end, I have had some success with those folks who are like, no, there's a possibility that one day I'm going to use that data and make a billion dollars.
[00:17:00] I've said, great. Let's put it into cold storage, into a deep archive. And if it sits in there for more than two years, I'll delete it. Is that okay? And that has actually worked out pretty well because two years later, they've totally forgotten about it and are like, what are you talking about? And then I just delete it.
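That cold-storage bargain can even be self-enforcing. A minimal sketch assuming the archive lives in AWS S3; the bucket name and prefix are invented for illustration.

```python
import boto3  # assumes AWS; bucket and prefix names are illustrative

s3 = boto3.client("s3")

# Encode the bargain as a lifecycle rule: "maybe someday" data moves to
# deep archive after 30 days, then expires automatically at two years,
# so nobody has to remember to come back and delete it.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-corp-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "someday-data-two-year-limit",
            "Filter": {"Prefix": "maybe-someday/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            "Expiration": {"Days": 730},  # gone after two years
        }]
    },
)
```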
G Mark Hardy: Yeah. In the military, it's called command by negation: unless you say otherwise, I am gonna do such and such at this time. If the boss is okay with that, 'cause she's got so many other things going on, it's, oh yeah, whatever. Or you have a high degree of trust, and then, yeah, I trust you.
At which point you don't have to get permission. Hey, can I delete that two-year-old data? Wait a minute, and now comes the pack rat again. So you're best to say, hey, we agreed that within two years, if you haven't called for it, it goes away.
Aimee Cardwell: I've now noted that as a future technique for me to use. Thank you very much.
G Mark Hardy: There we go. We're learning from each other. Who knows? You've got Aimee and G Mark, Incorporated, helping people out of problems. And when we think about getting people outta problems, we're talking about visibility of data, really, and [00:18:00] what we can do to improve our visibility of data, particularly if we're talking about legacy, and not all the legacy is necessarily in the same format as what we're working with today.
Heaven help you if it's in a file cabinet full of paper folders; you have no visibility into that. But more realistically, how do we get our heads around this problem?
Aimee Cardwell: There are a number of amazing data discovery tools in the marketplace that have sprouted up in the last five years or so, and they work in the same way that Vanta or Drata do: they have little connectors that connect to all of your different data stores. They'll look for Salesforce data. They'll look for data that's in some old database that you forgot about.
So they'll look through your whole network and find sources of data, so that now you know where they sit, and you can actually see whether or not they're being touched, and try to figure out how to close down as many of those databases as possible. But if nothing else, at least you can see where they are.
Again, [00:19:00] think about GDPR and a customer's right to be forgotten. If you don't even know where their data is stored, how are you gonna comply with that?
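The connector pattern she describes is simple at its core. A toy sketch: each "connector" yields locations and data samples, and the loop flags anything that looks like PII. The connectors and the single regex are stand-ins for what commercial discovery tools do with hundreds of integrations and far better detection.

```python
import re

# Crude PII hint: SSN-shaped numbers or email addresses.
PII_HINT = re.compile(r"\b(\d{3}-\d{2}-\d{4}|[\w.+-]+@[\w-]+\.\w+)\b")

def salesforce_connector():
    yield "salesforce://contacts", "jane@example.com asked about her invoice"

def forgotten_db_connector():
    yield "legacy-db://billing_2009", "patient 123-45-6789, podiatry visit"

def discover(connectors):
    """Build an inventory of data stores and whether they look like PII."""
    inventory = {}
    for connector in connectors:
        for location, sample in connector():
            inventory[location] = bool(PII_HINT.search(sample))
    return inventory

for loc, suspect in discover([salesforce_connector, forgotten_db_connector]).items():
    print(f"{loc}: {'PII suspected' if suspect else 'clean'}")
```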
G Mark Hardy: Unless we answer, we forgot where we stored it. Does that count as a right to be forgotten?
Aimee Cardwell: I don't think that's gonna hold up, but you can give it a shot.
G Mark Hardy: Yeah. People in the US think the right to be forgotten means all those pictures of the stupid stuff I did in college, I can delete them when I'm applying for jobs. No. The right to be forgotten means, for example, I do business with a company.
At the end of the transaction, I buy something, they sell it to me, they've got my client data, and I say, you know what, I'm not gonna do business with you anymore, and they say, we have complied with GDPR, with your right to be forgotten. Next month I read about this company having a data breach. I'm not too worried about it, because if they are in fact following what they said they were doing, my data has already been deleted and I'm off the hook.
And so it's popcorn time. We want to see what happens, but we're not in that game.
Aimee Cardwell: But also, I think, you and I having seen and worked with very [00:20:00] large enterprises, I'll use UnitedHealth Group as an example: they acquire two new companies a month, every month. So CISOs used to think about perimeters. Perimeters don't mean anything when you keep throwing two new companies in every month.
But my data might be at the podiatry office that we just bought. My data might be at the dentist's office that we just bought. And now that it's owned by UnitedHealth Group, they're still responsible for removing the bits of data that belong to me. Healthcare's a little bit different because, as you said, there are some regulatory requirements about storing patient data, but you can think about that for any company that does a lot of acquisitions.
G Mark Hardy: Yeah, that's a good point. And things age out over time, although I don't know how much of a case someone would have if they said, hey, these are my medical records from 35 years ago, I don't want them disclosed. So what, who cares? But who knows; I'm not an expert, and most of us probably won't have to worry about that. But let's get outta the [00:21:00] past and back into the future again.
We talked about it, but what about AI? If we look back in our history, you were talking about Netscape, and that's when we were first starting to communicate, as compared to, hey, everything is local and standalone. And then we had, oh wow, Novell NetWare:
I can actually connect a couple of computers together, just run this cable around and clamp little vampire taps on this Ethernet cable. But what comes now is that with AI, think of the errors we might have made in going out to the cloud: governance problems, the things we hear about S3 buckets being left unprotected. A major compromise takes place because the cloud hosting organizations are following the directions of the client,
and the client did not enforce, or chose not to enforce, or might not even have been aware of best practices. And so, yeah, there it is, it's out there. So we've had a lot of problems in the past. How do we avoid those problems [00:22:00] going forward with the way we roll out AI? Can we directly apply those lessons learned, or are we in terra incognita and we gotta figure it out ourselves?
Aimee Cardwell: I love that you drew the parallel between moving to the cloud early and what's happening with AI. And I draw those parallels all the time, because I say: remember the companies who were afraid to go into the cloud, and instead they built their own private cloud, and then they tried to do a hybrid cloud, and then they tried to do a cloud-agnostic cloud.
So they were moving slowly because of their fear. And there were other companies who were like, I'm leaving the data center behind, I'm going headfirst into the cloud. The companies that went headfirst into the cloud actually fared better. They spent less money and a lot less effort,
and their data and applications got cloud native faster. And that's why, earlier, I said I really want everybody in the company to be playing with AI. I really want to encourage people to [00:23:00] use it, 'cause if you end up being the last company on the block to have an AI-literate workforce, it's not gonna be very good for your business. That said, up until recently, that was pretty dangerous, because you're expecting Bob in accounting or Joe the customer service rep to do the right thing by your policy. But there are some new data applications out there, I think one was actually just bought by SentinelOne, that
look at all of the things that people type into AI prompts and automatically redact sensitive information. Have you seen any of those in action?
G Mark Hardy: I've heard of it. I haven't actually played with it, but,
Aimee Cardwell: So good. I learned about this from my board role, actually. I watched it in action and I was shocked, just shocked. Literally, company-sensitive information was redacted.
PII was redacted. It was really impressive.
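Those gateways sit between the user and the model. A minimal sketch of the idea, not any vendor's product: the patterns are illustrative, and call_model is a hypothetical stand-in for whatever provider API you actually use.

```python
import re

# Scrub a prompt before it reaches any external model.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"\bProject\s+[A-Z]\w+\b"), "[REDACTED-PROJECT]"),
]

def redact(prompt: str) -> str:
    for pattern, label in REDACTIONS:
        prompt = pattern.sub(label, prompt)
    return prompt

def guarded_call(prompt: str) -> str:
    # call_model is hypothetical -- substitute your provider's client.
    return call_model(redact(prompt))

print(redact("Summarize Project Falcon; owner bob@corp.com, SSN 123-45-6789"))
# Summarize [REDACTED-PROJECT]; owner [REDACTED-EMAIL], SSN [REDACTED-SSN]
```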
G Mark Hardy: I like the fact you said shocked, 'cause here's the next business idea: we're gonna connect a cable from the PC to the user's seat, so that when they put the wrong information into the AI, they get a voltage through them.
And [00:24:00] you're gonna sit there and hear yowls and yelps all around. And then after a while, the noise should die down as people figure out what they should be doing.
Aimee Cardwell: Pavlov is applauding you right now
G Mark Hardy: And yeah, as long as we don't do a Stanley Milgram. In any case, for those who are not familiar with it, go look it up.
Aimee Cardwell: Or don't.
G Mark Hardy: Yeah, we're having so many inside jokes on this one. So as we take the lessons learned from cloud and apply them to AI: at first, we were concerned with AI.
Now we think, maybe take the foot off the brakes a little bit, we could go; it doesn't look as bad. But at the same time, it's important to understand what guardrails have to be in there, and I think we've articulated some of them. First of all, if you are not paying for the product, you are the product.
So write that big, giant $20-a-month check if you're gonna be doing anything at home. It's worth it, trust me; you spend a lot more money than that on coffee in a month. [00:25:00] And then, secondly, for your organization, get an enterprise license. We've been back and forth with my clients doing that, and we've settled on a couple of different AI models, and we said, basically, here's what you can use, 'cause we're paying for it.
Here's what you cannot use, which is the rest of the universe. If you're using your own personal stuff, that's fine; just be aware of the fact that you don't get those protections. And then, short of being able to put in the anonymizer on the way out and the de-anonymizer on the way back, the transition from something known to something abstract that will still function,
those are the guidelines that I'm offering to organizations right now. What else are you offering in terms of things that you put into your AI policies?
Aimee Cardwell: Those are all the things that I have in my policies, but I do supplement that with AI literacy programs. So in the same way that we have phishing education programs, and many companies do those monthly or quarterly, for most of my clients we're doing AI [00:26:00] literacy programs as well. Both to offer ideas on how best to use AI, because again, I want AI literacy in the company as quickly as possible, and of course also to say, here's what you should do, here's what you can do, and here's what you cannot do.
That is also protective in a court of law. So, you know, if somebody does make a mistake and you get busted for it, it's not that hard to say, yes, but that person has taken four different AI trainings that contained these pieces of information. So I think it's both healthy and also protective.
G Mark Hardy: And I even want to add a little bit of belt and suspenders to the protection. After you say, hey, here's our AI policy. Okay, fine, I agree. Great, here's a quiz. What? Yes, a quiz. It's written, it's open book. All right, do it, grade it.
Guess what? Okay, good, you got a hundred; into your HR folder it goes. Next year, another quiz. Now, four years later, something blows up. And of course the lawyers on the other side are not really interested in necessarily getting the truth; [00:27:00] they just want to cash out. But then you've got some pretty good resistance. Say, it's not the company's fault, it's this person.
Yeah, but you're responsible. Well, we have a policy. Yeah, but he didn't know the policy. Actually, he signed the policy. But he didn't understand it. Well, he took a quiz and he got a hundred. And then he took it again, and again, and again. What you're doing is building up enough that the other side says, there might be an easier way to make money than going after that.
And so nothing's perfect, because ultimately, if you're up against the other side whose lawyer plays golf with the judge, eh, all things are up in the air. And they used to say, if you think going to court with the best lawyer is expensive, try going to court with the second-best lawyer. As we look at these things going forward, we've talked about AI governance and some insights for that, and I think those are excellent ones that you've offered.
So thank you very much for those. What we're saying, then: as we said, don't use customer-identifiable data to train your AI models. If you're gonna use a private model, a Llama-based one, [00:28:00] there are plenty of open source models coming out, and we're getting to the point where I've even read that some people have an LLM that'll run on a Galaxy phone and do something.
But also, for those of us dealing with AI, there was something just yesterday in the Wall Street Journal. Of course it's not gonna be yesterday when people watch this show, so you'll have to go look it up. But they had five different deepfakes, and you had to listen to them and say which ones were live and which ones were fake.
And then, what was your score? I got all five of them.
Aimee Cardwell: I'm impressed, by the way.
G Mark Hardy: I was too; I was surprised. But at the same time, I've got this AI detector that definitely works on written stuff. I worked with one guy who used to send back one-word answers on his email, and then all of a sudden, outta the middle of nowhere:
"Is it important to consider all the elements that are involved?"
Aimee Cardwell: And they're bulleted. All the emails are bulleted with little sections.
G Mark Hardy: And the em dashes. Yes, the em dashes, that's the tell. So I do use AI. [00:29:00] I write my episodes myself, but I will use AI for some research, to go get stuff.
But I have found out it hallucinates, and it does it a lot. I was looking at one particular talk I'm doing for a bank next month, looking at financial regulations. It's a European bank, and I asked, what things do they have to comply with? When it came up with NYDFS, I said, no, I need something that operates over there. So it gave me some documents I was not familiar with, and I said, okay, give me chapter and verse. This is section three, this is section four. And I downloaded the documents, checked, and it was not those sections.
Aimee Cardwell: Oh, you even asked it for the reference and the references were fake.
G Mark Hardy: The document was there, but it misquoted the section.
So I had to use those magic words, think harder, and with that, it cranked away and came back with the correct link. But what I caution people about is a couple of things. One is, don't outsource your research. Consider it to be a very [00:30:00] eager but very junior assistant.
Somebody who is going to try hard but is, to be polite, a low-potential high achiever, somebody who works really hard to get mediocre results. And we might have hired people like that. But in any case, from the AI perspective, it's trust but verify, with the emphasis on verify. Because these models, if you understand how they work, are basically predictive: they just go one token after the next token, after the next token.
Typically, when we're doing language models, tokens are words, and we just pick the most likely word to come next. So I say peanut, you go peanut butter. Peanut butter and. Peanut butter and jelly. Peanut butter and jelly sandwich. But if I restart the sequence: give me the next thing, peanut.
Peanut allergies. And then what comes? Oh, peanut allergies may be a problem for young children or for people on airplanes. And if you keep on going: Peanut the squirrel was killed by the New York Department of whatever. And then after a while, you get to the things where you go, [00:31:00] wow, it's creative.
It came up with such an amazing idea. You know what? These things aren't creative. It just
Aimee Cardwell: glued a whole bunch of words together.
G Mark Hardy: It glued 'em together based on low probabilities; that's all it was, and it just happened to make sense. It's like a chess player who throws away all the nearly infinite moves you could make for a handful that do make sense.
But we find that this will do things that don't always make sense. And what it cannot do, what I've not yet seen an AI model do, is come back and say, I don't know. I would love for it to come back and say, I don't know.
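The peanut chain is a fair description of how sampling works. A toy bigram model makes the point: greedy decoding always takes the most likely next word, sampling sometimes wanders down a low-probability branch that reads as "creative", and nowhere is there a way to say "I don't know". The table and probabilities are made up for illustration.

```python
import random

# Toy bigram "model": next-word options with made-up probabilities,
# sorted most likely first.
NEXT = {
    "peanut": [("butter", 0.7), ("allergies", 0.25), ("the-squirrel", 0.05)],
    "butter": [("and", 0.9), ("cookie", 0.1)],
    "and":    [("jelly", 0.8), ("toast", 0.2)],
    "jelly":  [("sandwich", 1.0)],
}

def generate(word, steps=3, greedy=True):
    out = [word]
    for _ in range(steps):
        options = NEXT.get(out[-1])
        if options is None:   # it never says "I don't know" --
            break             # it just runs out of table here
        words, probs = zip(*options)
        nxt = words[0] if greedy else random.choices(words, probs)[0]
        out.append(nxt)
    return " ".join(out)

print(generate("peanut"))                # peanut butter and jelly
print(generate("peanut", greedy=False))  # sometimes: peanut allergies...
```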
Aimee Cardwell: That's a great question.
G Mark Hardy: Because it's like 0.0001 probability: hey, I've got a piece of data there, let me go fetch it, and I'm gonna serve it with the same confidence level that I would serve something that is almost obvious. So that's the danger: you'll always get an answer back.
So you mentioned AI literacy, and it's important that we communicate not just to our technical leadership,
but to business leadership across functions: HR, for example, [00:32:00] manufacturing, sales, whatever. These folks have to get enough about AI that they can understand it's not a magic bullet. It has great capabilities, but built within those capabilities are the seeds of its own failures, which is that
it's forced to give you an answer, and that forcing function is not based upon probabilities; it's based upon a requirement that you get some answer back. So what education do you recommend? Of course we're gonna start with CISOs, but do we need to be the agent of change?
Should that come from HR? Should that come from IT? Where should that agent of change come from in terms of making AI literacy an important element of an organization's culture?
Aimee Cardwell: Remember when we started this conversation, we said that CISOs have a big hammer, right? We have the ability to influence the whole organization better than almost any other part of the organization. So if we say, look, we're [00:33:00] already doing phishing training and we're gonna add AI literacy training to that, people are gonna be like, is it gonna cost me any more? And I'll be like, a little bit more, but not very much. They'll be like, great, okay, make it happen, because you already have a channel for this. Whereas if it's the AI czar of your company, if you even have one, because in many companies the AI oversight is very disaggregated,
you're never gonna get AI literacy in the company. So I don't pay a lot of attention to, does this fall under my job title? I pay a lot of attention to, is this a thing that I'm suitably placed to help the business make progress on?
So I wouldn't say that AI training necessarily falls under the CISO, but I would say if you have the ability to be the driver of it, you should make that happen.
G Mark Hardy: That's a very good point. Although I wanna work in an organization where I get to hold the hammer instead of having it held over my head. Any other thoughts that you [00:34:00] have?
Ultimately, when do we know that we've got AI risk under control? Is it ever under control?
Aimee Cardwell: Do you have any risks under control, ever? Any of them?
G Mark Hardy: Okay, let me refine that term: AI risk to an acceptable level, where we all accept a certain level of risk.
Aimee Cardwell: Yeah. With the tools that I see coming out now, with the redaction of data that people put into AI, I'm starting to believe that risk can get a little bit easier, at least the
distributed problem of having all of your employees have access to AI. Initially, I think what we did was say, we're just gonna shut down the connection between any work computer and any AI.
But as we've already discussed, we also don't wanna shut down people's ability to use a tool that everybody else in the world is using. So that connection of tools that's finally starting to catch up reduces that risk. For me, the next big risk is people who mean well but make big mistakes, [00:35:00] like dumping a whole PII database into an AI because they're trying to draw conclusions from it without doing good de-identification.
So I think that's where a lot of my focus is right now: how do I create a really lightweight governance board, and make sure that folks know when they should come to that governance board for assistance.
G Mark Hardy: Wow, I think that's really good insight. Any other thoughts in terms of words of wisdom for CISOs, or people who are aspiring to become CISOs?
Aimee Cardwell: I come from a technical background, like many CISOs, and it's taken me a very long time in my career to realize that the softer skills are just as important, if not potentially more important, than the technology skills. So the lesson that I always try to impart is: relationships across the enterprise matter.
And you should really start making sure that you're investing as much time in making solid relationships [00:36:00] across the leadership team early, before you need them, because when you do need them, you'll be really glad you did.
G Mark Hardy: That's really good insight. Any last thoughts before we wrap up here?
Aimee Cardwell: This has been a great conversation. I love how aligned we are on so many of these thoughts.
G Mark Hardy: And I have enjoyed this incredibly. So for our watchers and listeners out there, this has been Aimee Cardwell, and we've been talking a lot about AI governance, and about different ways of getting our hands around privacy as it compares to security. There's an intersection of all three of those, and it's emerging.
We're gonna see how things go, and it's gonna change over time. If you follow CISO Tradecraft on LinkedIn, great. If not, you're missing out, because we do more than just podcasts. We also have a good steady stream of posts that are high information, low noise, a good signal-to-noise ratio, plus a Substack newsletter, as well as other programs.
For example, and of course this is gonna air after I record it, I'm hopping on a plane this afternoon to fly to London. I'm teaching a half-day course on cybersecurity [00:37:00] leadership for 44CON, under the CISO Tradecraft label. Hopefully that'll work out well, and we're gonna bring this back to the US and make more training available for people out there, so you can get better at your CISO Tradecraft.
So thank you very much for being part of our show, Aimee. Thank you, as always, for your contributions to the community and to the industry, as well as being part of CISO Tradecraft. And for our listeners out there, this is your host, G Mark Hardy. And until next time, stay safe out there.