Episode 7

Published on:

10th Jun 2025

The Future of AI in Security: Enhancing Human Capabilities and Ethical Considerations

Join us as Florian Matusek, Director of AI Strategy at Genetec, takes us on a journey into the evolving role of artificial intelligence in the security industry. Florian and Steve shed light on how AI has transitioned from its early days to today's sophisticated generative models. Emphasizing AI as a tool to augment human capabilities, rather than replace them, Florian advocates for terms like "machine intelligence" and "intelligent automation" to better reflect AI's practical applications. 

Steve and Florian address crucial topics such as responsible AI regulations and ethical data usage, drawing comparisons to frameworks like the EU AI Act and GDPR. They stress the importance of self-regulation and ethical standards in deploying AI technologies, especially in security applications. Listen as we review some valuable insights into AI's future in security, setting the stage for future episodes that will continue to explore the implications and advancements of this dynamic technology.

About our guest:

Florian Matusek is an expert in artificial intelligence strategy, currently serving as the Director of AI Strategy at Genetec following the acquisition of his venture, Kiwi Security. With a rich background that began at Skype as a video expert, Florian has become a prominent figure in video analytics and AI, recognized for his contributions through an Amazon global bestseller, "Nowhere to Hide," along with numerous scientific publications and patents. As a sought-after public speaker, Florian has been featured in outlets such as Wired and the LA Times and hosts the podcast "Video Analytics 101." Holding a PhD in information processing science, his expertise is deeply rooted in media computer science and computer vision. Florian also plays a vital role as a member of the AI advisory board for the Security Industry Association (SIA) in the US. Known for his engaging discussions on AI's role in augmenting human capabilities, Florian emphasizes the importance of ethical considerations and practical applications in technology.

Connect with Florian on LinkedIn


Chapters:

[0:00:05] Artificial Intelligence in Security Technology 

Our guest Florian Matusek discusses the historical evolution of AI in the security industry, emphasizing the need to view AI as a tool that enhances human capabilities rather than replaces them, and advocates for terms like "machine intelligence" and "intelligent automation" to better describe AI's practical applications.

[0:08:11] Advancements in Video Analytics Technology 

This segment explores the practical integration of AI in video analytics, highlighting the importance of selecting the right technology for real-world applications and managing user expectations by focusing on outcomes rather than hype.

[0:19:21] Responsible AI Regulations and Transparency 

Florian underscores the importance of proactive self-regulation in the AI and data privacy sectors, drawing parallels to the EU AI Act and GDPR, and advocates for ethical standards in AI deployment to build trust and maintain human oversight.

[0:27:33] Navigating Risks and Opportunities With AI 

The conversation addresses the issue of hallucinations in large language models and emphasizes the importance of user education and careful implementation of AI to avoid risks, highlighting the transformative potential of generative AI in solving specific problems.

[0:33:36] The Future of AI in Security 

This chapter concludes with a forward-looking discussion on the evolving role of AI in various sectors, emphasizing its continued significance in technological advancements and future podcast episodes.


Resources:

Watch Florian’s podcast “Video Analytics 101”

Order a copy of Florian’s book Nowhere to Hide

See more information about Genetec AI

Read Genetec's "State of Physical Security Report"

Read the EU AI Act

Read the GDPR (General Data Protection Regulation)

Find out more about Security Industry Association (SIA) - Information. Insight. Influence.



Meet your host Steve Kenny: Steve has spent 14 years in the security sector undertaking various roles that have seen him take responsibility for key elements of mission critical, high profile projects across a number of different vertical markets. For the last several years, Steve has focused his attention on how technologies can best complement day to day operations and specifically address operational issues by supporting the A&E consultant community across Northern Europe. Steve is a committee member for ASIS International focusing on Education for the security sector and the UK technology advisor for TINYg (Terrorist Information New York group).

Connect with Steve on LinkedIn

More about Axis Communications: Axis enables a smarter and safer world by creating solutions for improving security and business performance. As a network technology company and industry leader, Axis offers solutions in video surveillance, access control, intercom, and audio systems. They are enhanced by intelligent analytics applications and supported by high-quality training. Axis has around 4,000 dedicated employees in over 50 countries and collaborates with technology and system integration partners worldwide to deliver customer solutions. Axis was founded in 1984, and the headquarters are in Lund, Sweden.

Find out more about Axis Communications - Innovating for a smarter, safer world

https://www.axis.com/

Transcript

00:05 - Steve Kenny (Host)

Welcome to today's episode of Security Tech Talk. We're excited to have Florian Matusek with us. Florian is an industry leader in artificial intelligence and currently serves as the Director of AI Strategy at Genetec, after the company's acquisition of his venture Kiwi Security about seven years ago. Florian's journey began at Skype as a video expert, and he has since become a recognized figure in the world of video analytics and artificial intelligence. He's authored an Amazon global bestseller, Nowhere to Hide, and has numerous scientific publications and patents to his name. As a public speaker, Florian has been featured in outlets like Wired and the LA Times, and he's also the host of another podcast called Video Analytics 101.

00:55

With an education in media computer science, computer vision and, later, a PhD in information processing science, Florian is deeply rooted in artificial intelligence and security technology as a whole. He's a member of the AI advisory board for SIA, the Security Industry Association in the US, and I've had the privilege of sharing the stage with Florian on a number of occasions, as either a keynote speaker or as a panelist at various international security events. I'd like to add, whilst it's not relevant to today's conversation, a fun fact: Florian was also a child actor. So, not relevant to today, but hopefully it'll provide some fun for the audience. We'll be spending our discussion today focusing on artificial intelligence and the future of security technology. Florian, it's an absolute honor and I'm delighted that you've taken the time to join us on today's podcast.

01:53 - Florian Matusek (Guest)

Well, I'm super happy to be here. Thanks for having me, Steven.

01:56 - Steve Kenny (Host)

On almost every podcast that we've recorded previously, with all our different speakers, everyone at some point has referenced artificial intelligence and the impact that it's going to have on the security industry. And I'm always mindful that when we speak about artificial intelligence, if I'm a consumer, I'm led by what Hollywood is presenting to us. No longer is the biggest threat going to come from an alien invasion; the biggest threat is going to come from artificial intelligence. So please provide some context on artificial intelligence and what it actually means for us in the security industry today.

02:35 - Florian Matusek (Guest)

I would say "artificial intelligence" was a bad choice of words back in the 1950s, when the term was first coined, because what we actually mean by it keeps changing.

03:17

Just two years ago, I would have told you artificial intelligence is detecting people or vehicles. Today we mean ChatGPT. It comes down to just being a tool, right? It's a technology that we can use to create something or to achieve an outcome. But artificial intelligence by itself is not a threat, and it's also not nothing. It has great potential and great opportunities, but in the end, it's a tool for us, and we can choose how we use that tool and how we leverage it.

03:48 - Steve Kenny (Host)

Yeah, I think as a concept, and you touched on it there in terms of the capabilities, I find it really interesting how we as the security industry have marketed the overused term artificial intelligence. Do you think that what we see is genuine artificial intelligence, in terms of what we as security professionals and security technology organizations present? Do you think we're misusing the term, or are we using it correctly?

04:18 - Florian Matusek (Guest)

I think we're doing the same thing that people in other industries are doing as well. So in the end, we're jumping on the bandwagon. But I'm not sure that it's a good idea, to be honest, because if we talk about intelligence, then immediately we think about humans, the way humans work, the way operators work. So you could think that if you use an AI system in the security industry, that you could just replace humans and operators. But that's completely not the case. It doesn't work this way and it's also not how the technology should be used. The technology should be used to augment the human's abilities, essentially to turn regular users into super users. That's really the point, and not to replace anyone, because it's just not possible. It's not the same type of intelligence that we mean when we talk about people. So if you want to use the term intelligence, maybe a better term would be machine intelligence, to highlight that it's something completely different than what we humans do.

05:14

And what we at Genetec like to talk about more is, we like to focus on the outcomes, right? So technology is one thing, and we're all excited about technology, we all like it a lot. I mean, I can talk for hours about AI, but in the end, what matters is what we deliver to the end users. These outcomes, and AI is just one part of it. It's one tool in our toolbox that we can combine with things like automation, smart UI, smart UX and this whole thing, and this is why we'd rather talk about intelligent automation. How can we take AI and do something useful with it?

05:50 - Steve Kenny (Host)

So it's interesting that you talk about outcomes. I had a really interesting discussion last week with a consultancy, but not a traditional one. Historically, when people have spoken about video surveillance technologies, they instantly go into the tech. They'll talk about frame rate, they'll talk about resolution, they'll talk about pixel density. These consultancies are now focused on the outcomes. They do not care about the journey; they just want to know what the solution is going to deliver for them. I think this has changed the conversation, and I guess you've seen that within Genetec as well.

06:25 - Florian Matusek (Guest)

Yes, absolutely. We see that at Genetec as well. Ultimately, the operator who sits in front of the screen doesn't care about how the outcome is achieved, about how we, as manufacturers, build something that he or she can eventually use; that's really the technology. What they care about is how we can make their operations more efficient, how we can help them do their daily tasks. That's what it's really about. And it doesn't help any end user to get marketing material from us with "AI" written large on it, because that eventually doesn't really mean anything. What they should care about is how we can deliver something that's really useful for their day-to-day lives, and that's what we at Genetec try to focus on: these outcomes, as I said.

07:08 - Steve Kenny (Host)

So I had an interesting discussion where I sat on a panel last week over in the Middle East for the Professional Security and Safety Alliance, and they were talking around artificial intelligence, whether it be a friend or a foe. Obviously, they have a huge audience which is looking at frontline security, and one of the risks that kept being flagged up was that people were worried the adoption of some form of artificial intelligence was going to take away their jobs, their opportunity to work. But what you've said earlier is how this will transform people into superhumans in terms of how they can make decisions. How does that look? How do we reassure frontline staff and guarding companies that actually this is going to help them work more efficiently?

07:58 - Florian Matusek (Guest)

I think the best way is really to show them, to bring out features, sit down and show them: hey, this is how you can do your daily operations more easily. And I think the same thing applies in the security industry as in many other industries. There's this famous quote by Jensen Huang, the CEO of NVIDIA, who said: you won't be replaced by AI, you will be replaced by people using AI. And it's kind of the same thing here, because these technologies make us more efficient. We can be faster, we can solve cases faster, we can increase security. So the best way to convince people is really to show them, sit down with them. Because the reality is, these features will be there in the systems because they make sense. They won't be there for the sake of it, they won't be there because AI sounds cool, but they will be there because it makes sense. And once we ship them, we can show them: hey, this is how you can do this task more easily. This is how you can increase the speed of these kinds of steps. These are things that you couldn't do before, and they will enable things that were just not possible before. In reality, that's a huge opportunity.

09:01

It's a huge revolution, but we have to make sure we're not jumping on the hype train, and really stay grounded and start with the problems first. Because what happens a lot is, there's new technology and we as tech people get super excited, and we just want to leverage this new technology and put it into the product just for the sake of it, and very often that turns into trade show features.

09:24

They look great on trade show floors, but in daily operations they're not being used, and consumption gaps are just growing. We have more and more features that people are not using, and we have to make sure we're not making the same mistake with AI. So what we try to do at Genetec is, we go to our end users. We sit down with them, spending hours in conversations with the operators and asking them: "So what does your day-to-day look like? What are the tasks that you have to do? What are the steps? Where are your pain points?" And then we see which technology is the right technology to solve these problems, to make them more efficient. Many times it might be AI, but many times it might not be AI, and in the end we come up with features that really make daily lives better. We have to make sure that AI is not a solution in search of a problem, but the other way around.

10:11 - Steve Kenny (Host)

Yeah, great point. When we think about our industry and how it's going to benefit from the development of artificial intelligence, obviously the discussions around analytics and AI are intrinsically linked; they are one and the same to the majority. Do you think as an industry we've had our fingers burned in terms of the process and the evolution of analytics? Some 15 years ago, 10 years ago, or even more recently than that, everyone was pitching up at events and trade shows, selling the greatest and latest analytics to their customers, and they didn't work, they were expensive, and they actually caused more problems than they did solutions. Do you think that is potentially holding back the adoption of fantastic new opportunities in tech? Or do you think that we've come through that resistance, because we are at a point now where we are starting to demonstrate the capabilities?

11:12 - Florian Matusek (Guest)

So one example that I like to give is, when I started off in this industry 20 years ago, when we went to end users, end users were expecting the Ferrari of analytics. You saw it in the movies. You saw Terminator and other super fancy movies, and this is what people were expecting. But instead of the Ferrari, what the state of the art was, what we could deliver, was a bicycle. A bicycle makes a lot of sense, but it has a very narrow use case. And nowadays, after so many years of development and improvements and machine learning and deep learning and AI, we got so much better. But today, when we go to end users, they're still expecting the Ferrari, and what we can deliver is a Ford Focus, which is such a great car and such a great improvement, but it's still not everything that is possible and everything we see in movies. So I think there is still a way to go on educating the market about what is possible and what is not possible, because, to be honest, as an end user, it's very hard to filter through all the noise and all the buzzwords and try to find out what's possible and what's not. So we, as manufacturers, do have a certain responsibility to educate the market on this. But I do think also that there is an evolution in the understanding of the market. Since AI is so commonplace right now, everyone has some idea of what it can do, and through the addition of deep learning into video analytics algorithms, they did get much more accurate. We are today at a point where we don't have to do a POC for each and every video analytics project, because that was the case 10 years ago: if you wanted to do a video analytics project, you had to do a POC first, because you had no idea what to expect and what would be delivered in the end. I think we're beyond this point nowadays, where we have a certain expectation that it works, and it does work for very common use cases like perimeter protection, for example. But of course, there are very advanced ones, and now we see all the new stuff around generative AI, where we will have this whole thing all over again, where we don't really know what's working or not. So there is a certain evolution.

13:21

I think that there are other challenges today. Previously, you were deploying video analytics on maybe 10 cameras, and then you did a POC and it worked fine, and that's okay. Nowadays, the expectation is that we want video analytics on almost every camera, right? And projects are growing. So today a thousand-camera project is not a big project anymore; it's almost a medium-sized project.

13:43

But deploying video analytics on all of them, if you do that on the server, for example, or in the cloud, is almost unfeasible, because you run into scalability issues. So these kinds of questions are much more prevalent, where you have to look at the architecture: where is my analytics running? Is it running on the camera, or maybe on an appliance? How do you manage the whole thing? How do you configure 1,000 cameras? Is it self-configuring or is it not? If you have some load somewhere else, can you do load balancing? These kinds of questions become much more prevalent than the actual accuracy of the video analytics, because I think a certain trust has been built there already. But we're still not at the point where a huge deployment is easily possible, so there is still some way to go.

14:25 - Steve Kenny (Host)

So, from a solutions point of view, what would you say the industry should feel comfortable hanging their hat on, tech where we can say: we know this works today, it's tried, it's tested, there are lots of different deliverables? What are the tangible applications that you see fully operational today that people should feel comfortable with?

14:45 - Florian Matusek (Guest)

So one of the applications that people can really trust is basic detection that an object entered a scene: a person walking in through the front door, a vehicle entering an area, a car going by in the parking lot. Those are things that are quite robust and quite established, and they will work, if you have a trusted vendor, of course. You can deploy this without much of a POC, I think, and there's usually minimal configuration involved. Another thing is forensic search, so searching through the data that you have. That's quite established already, and it's a very easy application because it solves a huge use case in the industry.

15:44

Searching through large amounts of video, through unstructured data, is a huge problem for us, and luckily it's also pretty well solved with forensic search tools. One of the reasons is that there, the accuracy of the detection is less important. If you search for a person in a red shirt, and nine of the results have a red shirt and a tenth has maybe a black shirt, it doesn't really matter so much for this application, right? You're searching for something, and you can just ignore the rest. This is why it's a very robust application where not much configuration is involved. Where it gets more complicated is more complex use cases, maybe connecting a car with a person. Multi-camera tracking is pretty much still unsolved, I would say.

16:30

With counting applications, it really depends, so that's something where you might want to test accuracy; it depends a lot on mounting positions and so on. And on the perimeter protection side, we're pretty far when it comes to thermal applications, thermal cameras; I think we're quite good in terms of accuracy. If you have more challenging scenarios, that's still something you want to take a closer look at. So, roughly, those are the applications that I would say work today.
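As a concrete illustration of the forensic search Florian describes, here is a minimal sketch of the idea in Python: detections are stored as searchable metadata, the operator filters by attributes, and a stray misclassification costs only a moment of review. The field names and data shapes are illustrative assumptions, not any product's actual schema.

```python
# Toy illustration of forensic search over video-analytics metadata.
# Field names are made up for illustration, not any product's schema.

detections = [
    {"camera": "lobby",   "time": "09:12", "type": "person", "shirt": "red"},
    {"camera": "parking", "time": "09:15", "type": "car",    "shirt": None},
    {"camera": "lobby",   "time": "09:20", "type": "person", "shirt": "black"},
    {"camera": "gate-2",  "time": "09:31", "type": "person", "shirt": "red"},
]

def forensic_search(records, **attrs):
    """Return every stored detection matching all requested attributes."""
    return [r for r in records if all(r.get(k) == v for k, v in attrs.items())]

# "Person in a red shirt": the operator reviews a short candidate list,
# so one wrongly classified shirt colour costs a glance, not an alarm.
for hit in forensic_search(detections, type="person", shirt="red"):
    print(hit["camera"], hit["time"])
```

This is also why the accuracy of the underlying detector matters less here than in alerting: a human reviews the shortlist, so recall and speed of search dominate.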

16:58 - Steve Kenny (Host)

I think you did reference the accuracy on the people counting, and I think it's important that if it is a mission-critical, high-security type environment where people counting needs to be 99.9% accurate, then obviously we need to be mindful of how that is achieved. But when we look at organizations that are looking at business intelligence, digital transformation, some form of automation, then actually the accuracy can be a little bit less, and that's where we see organizations that will truly benefit from these applications, because they can make smart, quick, efficient decisions without having to rely on that life-safety type accuracy.

17:37 - Florian Matusek (Guest)

Absolutely. It really depends on the use case. And even further, if we think about people counting, for example at entrances and exits, if you don't use it for life safety applications, I believe a certain accuracy is already fine. It provides great information, information that you can also use in other types of systems. Just imagine you can export the data into maybe Tableau, where you also feed in HVAC information or other kinds of information and combine these data sources. I think this is very valuable. But going even further, if you think about crowd counting outside, if the police just want to know roughly how many people are in a demonstration, a 90% accuracy might actually be fine. Or drawing comparisons: how many are there this Saturday compared to last Saturday? I think there it's absolutely fine. So I absolutely agree: as you were saying, it totally depends on the application. Accuracy that's okay for one application might not be okay for another.
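A quick back-of-envelope check of that last point: if a counter consistently sees only 90% of the people present, the absolute numbers are off, but week-over-week comparisons survive because the bias cancels out. A tiny sketch, assuming a systematic undercount and made-up crowd sizes:

```python
# Hypothetical crowd sizes for two Saturdays, and a counter that
# systematically detects only 90% of the people present.
true_last, true_this = 10_000, 13_000
accuracy = 0.90

measured_last = true_last * accuracy   # reports 9,000
measured_this = true_this * accuracy   # reports 11,700

# Absolute counts are 10% low, but the week-over-week ratio is intact:
print(measured_this / measured_last)   # ~1.3, the 30% rise is still visible
```

Note this holds only for a consistent bias; random, scene-dependent errors would not cancel so cleanly.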

18:36 - Steve Kenny (Host)

So, I think obviously it would be remiss of us not to discuss the different standards and regulations that are starting to impact how AI applications are being used and how they're being deployed. Obviously, earlier this year you've got the EU AI Act, and I know from a security industry point of view there was an agreement, and information that was provided by SIA, by ASIS in the US, and by the International Biometrics Industry Association, looking at an evaluation of the text and the framework within the AI Act and what that meant specifically for the security industry. How do you see that from your side? Because I know that you sit within the SIA advisory group that was instrumental in putting some form of communication together. What are your thoughts on that, and how will it impact both the training of tech and the deployment of tech moving forward?

19:40 - Florian Matusek (Guest)

So this is a topic that we discuss a lot in SIA, in the advisory group. First, because it's important as an industry to understand what kinds of regulations are out there globally. We believe something similar will happen as with GDPR, where Europe shows the way and similar regulation is adopted in other jurisdictions as well. We see this in Canada; they're currently updating their data act. We saw the Biden executive order a year ago. There is stuff happening in Brazil. So it's important to be on top of it. And actually the drive, or the idea, is that we as the security industry want to get ahead of the game, maybe even self-regulate to a certain degree, to make sure that we already comply with regulations before they're even issued, or maybe to not even make regulations necessary. Because, at the end of the day, the problem with all these regulations is that they are too slow to keep up with technology. Just look at the EU AI Act. I got the first draft of the AI Act four years before it was published; we got it to review the draft. Just imagine what happened in those four years. They even had to delay the regulation because, in the middle of it, ChatGPT came out and they had to adapt to it. I do believe that the EU AI Act is actually a good thing, in that we need some kind of framework for companies to work within. What will be important now, going forward, is to see how this is actually implemented, how companies can implement the risk pyramid and the different risk levels, and maybe even certify themselves. This practical point of view will be interesting going forward, and something similar happened with GDPR, where it took a few years to figure out how to do this in practice, as it will with other regulations as well. But it is important to have a certain kind of regulation, and, as always with regulation, we will have to see how we can strike a balance.

21:44

Recently, we published our State of Physical Security Report. That's a report we do every year where we interview thousands of people in our industry, from end users, consultants and partners, including end users that are not our end users, just from the industry, to get a feeling for what is important for the industry right now. And we also asked them the question about responsible AI. Is this a concern for you? How AI is being trained, how AI is being used, is it ethical, is it not ethical? In fact, 78% of our respondents, 78% of end users, said that this is a concern for them, and this is really interesting because this kind of topic is new for everyone, right? We didn't have this before, especially in the security industry. If you look at all the big ones, like Google and Microsoft, all of them have responsible AI guidelines, but in the security industry that's still something new.

22:37

We introduced our own responsible AI guidelines internally just a few months ago to our development team, a set of guidelines that are actually very close to the AI Act. But what we tried to do when we created them was to do what's morally right, and a side effect of this is that we comply with the regulation. We don't do it because of the regulation; compliance is the side effect. We want to do what's right, and this also enables us to adapt the guidelines as fast as possible as the technology evolves, without having to wait for any kind of regulation. Essentially, we defined these guidelines around three large principles.

23:16

They're around data governance, around trustworthiness and safety, and around how we keep humans in the loop. They provide very detailed guidelines for teams working with AI: how they source data; how we make sure that a data set we use hasn't been recorded in some kind of Chinese prison, but is one we're actually allowed to use and that is ethically sourced; how we handle access to the data; how we test; how we are transparent about this; and, of course, how we keep the human in the loop. That includes involving end users early on, like I said before, in the problem definition, but also involving the operators while they're using the system.

23:53

It's about how we make sure that the system is not making any decisions by itself, and that the human is always in the loop for any critical decisions. So yes, it's an absolutely important topic, and we believe much more needs to be done in the future, and there will be more. But we also try to be a voice in the market and say to other manufacturers: "Please take this seriously."

24:18 - Steve Kenny (Host)

Yeah, I think, as with every area of the market, you're going to get people that are going to do it correctly, that are going to do it professionally and ethically, and that are going to maintain the privacy considerations that are important.

24:28

You did touch on one area, which is the databases, and I think everyone's mindful that they don't want the next scandal like the one we saw with the likes of Clearview AI, where they were, I wouldn't say stealing face data in order to train their databases, but they were certainly taking, or getting access to, that information from the likes of Facebook, LinkedIn and Instagram, those sorts of platforms, without any consent-based authorization from those people. What do you think the industry needs to do in terms of the databases and how we're going to access them? Because there are certain technologies out there that do some form of ethnicity profiling in facial recognition systems, which is absolutely a no-no under the EU AI Act. What do you think we need to do in terms of external, public-facing statements to the consumers to demonstrate that what we're doing is correct and that we are compliant? Because we need to be more visible, we need to be more transparent, in order to regain that trust.

25:43 - Florian Matusek (Guest)

Yeah, it's an important balance to strike, because intuitively, of course, the best way is to be transparent about it. But at the same time, data sets are also corporate IP, right? You can't just make all of your data sets public.

25:52

So I think one way is to find a certain framework for how you can make your data sets auditable in certain kinds of situations. That is maybe a limited way we can let other people actually look at the data sets. But I think more important is to be transparent about our processes, to show what kinds of rules we have internally, what kinds of checkboxes people have to tick before they're using a data set, and to be very transparent and vocal about this, so we can build trust with our end users and show them that we're not using their data without any consent and that we are, of course, following all regulations. But even beyond that, it's really about building trust, and, yeah, I think the main thing is really to communicate about it.

26:39 - Steve Kenny (Host)

What was interesting, when I was on a panel and we were discussing the potential risks, was that I heard the Axis CIO speaking about large language models, ChatGPT, things like that. He said one of the biggest risks is not what we put in; it's what we get out. Because human nature is that we assume whatever we're getting, whatever it is telling us, is true, and then we start to circulate that information without authenticating that what we've received back is actually correct. All of a sudden we've got the proliferation of misinformation, because a large language model like ChatGPT gave us a small piece of incorrect information. How do you think we address that risk?

27:27 - Florian Matusek (Guest)

So, two ways. One way is, of course, on our side as manufacturers, when we develop the systems. There is a range of techniques to try to limit these kinds of hallucinations, as we mainly call them. There are different ways. There's a huge thing about prompt engineering. There is using technology such as RAG, retrieval-augmented generation, which essentially lets the LLM look facts up in a database rather than make them up. So there are some ways we can limit this, but it won't limit it 100%, because this is just the way that LLMs are built and trained.

28:00

The fundamental problem is that they are optimized to solve the problem of giving an answer the user likes. So they tend to agree with the user. They tend to claim that they have the facts, because that's what they are trained on, that's the feedback they got during the training phase, so obviously that's what they're doing. So it won't go away; that's just a fact of LLMs. There are some ways to limit this, but as long as we have LLMs, we will have this problem to some degree.
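To make the RAG idea Florian mentions concrete, here is a minimal sketch of the pattern in Python: retrieve supporting passages from a store first, then constrain the model's prompt to those sources so answers can be checked rather than taken on faith. The toy retriever, document store and function names are illustrative assumptions, not Genetec's or any vendor's implementation.

```python
# Minimal sketch of the RAG (retrieval-augmented generation) pattern:
# retrieve relevant documents first, then force the model to answer
# from them. All names and the toy retriever are illustrative.

from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

def retrieve(query: str, store: list[Document], k: int = 2) -> list[Document]:
    """Toy retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(store, key=lambda d: -len(words & set(d.text.lower().split())))
    return ranked[:k]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Constrain the model to the retrieved facts and require citations."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return (
        "Answer using ONLY the sources below, citing the source tag for "
        "each claim. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

store = [
    Document("incident-42", "A white van entered gate 3 at 02:14 and left at 02:40."),
    Document("incident-43", "Perimeter sensor 7 triggered twice on Saturday night."),
]

# The resulting prompt would be sent to the LLM; because the answer must
# cite a tagged source, an operator can verify it instead of trusting it.
print(build_prompt("When did the van leave gate 3?",
                   retrieve("van gate 3", store)))
```

In production systems the word-overlap retriever would typically be replaced by embedding search, but the grounding idea, answer only from retrieved, citable sources, is the same.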

28:28

We do this on the manufacturer side, but there is also something for all of us as consumers: to learn this kind of sensitivity. It's a bit similar to showing kids how to use social media. If we just let them go and consume social media without a critical eye, they will have a similar problem. And it's the same thing when we use large language models. We have to train ourselves not to trust them 100%, and to check the sources.

28:56

Many LLMs, or many chatbots nowadays, actually cite their sources, so you can click on them and read up on it. But I think we won't get around this education, and it's the same for the security industry, the same for our users. We will have to provide warnings every time you use an LLM, saying: hey, this is AI generated, please double-check. We will have to have training materials. Maybe there will even be training courses on this eventually, once we have more features around it. But as long as we have these kinds of LLMs and generative AI, this problem will stick around. The only way to really overcome it is education.

29:34 - Steve Kenny (Host)

Great point. Just before we wrap up: what do you think are the biggest opportunities we should consider with AI, and what are the biggest potential risks at the same time? Because there is great potential, but there is also great risk. What's your view on that?

29:54 - Florian Matusek (Guest)

Yeah, that's an interesting question. So, first of all, it really depends on what we mean by AI, right? If we talk about machine learning and deep learning, the great potential is really to get more accurate, to have super accurate applications that detect people and vehicles and then connect those across cameras. So I think there's a lot of potential there. But nowadays, when we talk about AI, we mainly mean generative AI, like LLMs, and I think we as an industry have not yet figured out the use case that will change everything. What we will see in the coming years is a lot of experimentation, features here and there. We'll see what makes sense for users and what doesn't, and try to learn from it and get feedback. And, if we're honest, it's the same thing that's happening overall. Well, we in Europe don't have Apple Intelligence, but in the US they now have AI in iOS, and if you look at it, many of these applications actually don't make any sense. They're just there because AI is great. What will happen is that, over time, you will see which kinds of features are useful and which are not, and the same thing will happen in the security industry. This is why I think it's important to lean back a bit, not be too anxious, not try to deliver too much too fast just for the sake of it because you feel you're falling behind, but to be deliberate and try to find the right features. So I believe there will be useful features, but we haven't found them yet, and it will probably take a few years. But it could be transformational, that's for sure. And in terms of risks, if we use LLMs, I think the biggest risk in the security industry, well, there are two big risks.

31:40

The first big risk is that we will see applications where we let an LLM, or an AI, make decisions. This is always an issue because it's very hard to test, and it might make the right decision in 99 cases and then, the hundredth time, make a wrong decision. In the security industry that's so risky it might cost lives. So we have to be super careful that we don't let it make any critical decisions. That's one risk. The other risk, when we think about the AI giving you a text output or a text summary or so, is that we have to make sure we don't let the AI make any judgments; the judgment should always be with the human. The AI is there to summarize things, to give you all the information, to give you the background, to prime you, essentially, to make the right decision. But the judgment of whether something is good or bad always has to be with the operator. We have to keep this risk in mind when we develop features, because it could also lead to very bad consequences.

32:45 - Steve Kenny (Host)

Yeah, brilliant. So one last one for the audience, and I'm very mindful that we've managed to squeeze a 48-hour conversation into 30 minutes today. AI as a topic can go on and on, and this is the start of most people's journey. If you could give the audience one takeaway from today, one nugget of information from your experience with AI, what would that be?

33:10 - Florian Matusek (Guest)

Don't focus on the technology. Focus on the outcomes. Focus on what it is that you need to achieve, what your problem is, what you want to optimize, and then see if you find a tool that solves your problem. AI is just a tool, nothing more. What you need is a feature or a product that solves your problem, to eventually provide an outcome for you.

33:35 - Steve Kenny (Host)

Florian, thank you so much for taking the time to share your experiences today. With the topic of artificial intelligence, we are at the start of this journey for many, and I look forward to future discussions, because I'm sure artificial intelligence will be an ongoing topic throughout this podcast. Thank you very much.

33:56 - Florian Matusek (Guest)

Well, thanks for the invite.

34:02

Thanks for tuning in to Security Tech Talk. If you've enjoyed today's episode, be sure to check out the other episodes for more insightful discussion and expert perspectives. Don't forget to subscribe so you never miss an episode. This podcast is brought to you by Axis Communications. Axis enables a smarter and safer world by creating solutions for improving security and business performance.


About the Podcast

Security Tech Talk
Conversations with security industry disruptors and innovators
We talk to security industry leaders, disruptors, and innovators with strong views and opinions on the future of topics like physical security, smart buildings, artificial intelligence, cybersecurity and more. We dig into the latest tech trends, explore how security is shaping the world, and delve into those tricky regulations (like NIS2, the Cyber Resilience Act, the EU Artificial Intelligence Act, the UK's Product Security and Telecommunications Infrastructure Act and more) that keep everyone on their toes. We are here to talk about technology trends, explore the big issues facing the security industry, and provide valuable insights that will support you and your business. Join us as we uncover important information to help you come away feeling well-educated and prepared for the future. This podcast is brought to you by Axis Communications Inc. - innovating for a smarter, safer world.

About your host


Steven Kenny

Steven Kenny – Manager, Architecture & Engineering Program – EMEA, Axis Communications.
With two decades of experience in the security industry, Steven Kenny has played active roles in numerous high-profile projects, both domestically and internationally. Over the last eleven years, his focus has been on understanding how security technologies can best support business security strategies, all while advocating for the heightened importance of cybersecurity and compliance within the physical security field.

Currently leading a team of Architect and Engineering managers across the EMEA region, Steven remains committed to contributing positively to global security practices. He is actively involved in industry associations and international standards organizations, seeking to collaboratively shape the landscape of security.

In a more behind-the-scenes capacity, Steven has provided consultative support to a national steering group instrumental in establishing the Secure by Design, Secure by Default certification. His close collaboration with the UK Surveillance Camera Commissioner reflects his dedication to enhancing standards in the physical security sector. As a speaker at international security conferences, Steven has modestly shared insights that have contributed to the industry's development and the identification of key technology trends.

Beyond his professional commitments, Steven has volunteered his expertise, previously serving as Director of Systems, Information, and Cyber Security for ASIS International and the UK chapter, before being elected as a board director. He also serves on the EMEA Advisor Council as the emerging technology lead for TiNYg (Global Terrorism Information Network). Additionally, he contributes to various standards committees supporting IoT security and plays a role in the BSI Private Security Management and Services. Steven Kenny's humble dedication has made a meaningful impact on the global security landscape, positioning him as a valued contributor to the industry.