The third and fourth biggest challenges are "false negatives" (32%) and "false positives" (28%). Unlike humans, AI lacks contextual awareness, so it struggles to interpret events it has not been trained to recognize. The AI may then flag harmless activity within the company as suspicious (a false positive) or, conversely, miss a real threat (a false negative), forcing IT managers to spend additional time sorting through these alerts.
Interoperability: integrating AI tools into existing cybersecurity infrastructure can be complex (26%)
Excessive trust in AI: over-reliance that makes people less alert (23%)
Adversarial attacks: attackers manipulate AI systems to evade detection (22%)
Ethical concerns: bias in decision-making (20%)
Human support: human expertise remains critical in threat analysis and decision-making (15%)
Continuous adaptation: attack techniques evolve rapidly, so AI systems must be continuously retrained (15%)
Regulatory challenges: e.g. complex data protection regulations that do not explicitly address AI tools (12%)
Costs: AI solutions require investment in both technology and the experts to operate the systems (11%)
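To make the false-positive and false-negative problem concrete, here is a minimal sketch of how these two error rates are typically calculated from detection outcomes. The counts below are hypothetical, purely for illustration:

```python
# Hypothetical detection outcomes over a batch of monitored events.
true_positives = 40    # real threats correctly flagged
false_negatives = 10   # real threats the system missed
false_positives = 90   # harmless activity flagged as suspicious
true_negatives = 860   # harmless activity correctly ignored

# False-negative rate: share of real threats the system missed.
fnr = false_negatives / (false_negatives + true_positives)

# False-positive rate: share of harmless events wrongly flagged.
fpr = false_positives / (false_positives + true_negatives)

print(f"False-negative rate: {fnr:.1%}")  # prints "False-negative rate: 20.0%"
print(f"False-positive rate: {fpr:.1%}")  # prints "False-positive rate: 9.5%"
```

Even a seemingly low false-positive rate can generate a large absolute number of alerts (90 in this sketch), which is why the survey respondents point to the extra triage burden on IT managers.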
While artificial intelligence can help companies make better strategic decisions in cybersecurity, it is becoming increasingly clear that they cannot ignore one crucial factor if they want to exploit its full potential: the human factor.