AI Adoption Exposes New Cybersecurity Threats

AI Adoption Exposes New Cybersecurity Threats

full version at cryptopolitan

A cybersecurity expert disclosed that AI is an unsafe cybersecurity threat for corporations that use it as their service, not only through attack execution by criminals but also legitimately.

AI’s cyber vulnerabilities

While the threat posed by bad actors using AI to deliver attacks has been widely discussed, Peter Garraghan, CEO and CTO of Midgard, which provides cybersecurity for AI specifically, tells Verdict: “The problem I have in mind is a cybersecurity threat to AI which is the one on the topic.”

Predictably, the most common and thus at-risk use of AI by companies relates to chatbots that serve as customer service and whose tone and data are typically customized for the organizations they represent. This is where Dr. Garraghan, a computer science professor at Lancaster University specializing in AI security and systems, decided to find Mindgard in 2022. “AI is not magic,” he says. It is merely a combination of additional coding, data, and hardware. Given modern-day cyber threats, all these menaces can also manifest in AI.

Let’s think of it this way: suppose that an attacker uses a Process known as SQL injection. An attacker can use that process to exploit vulnerabilities in web form fields like website log in or contact form. Another way teleprompter ingestion can be utilized is prompt injection, which is used in public applications. First and foremost, if security is not adequately ensured, AI tools can practically be horse-whipped into leaking their source code and instructions, company IP, or customer data. Secondly, AI tools can be reverse-engineered like other software applications to identify flaws.

Securing AI systems

Garraghan says of the gravity of the problem: In the envisaged future, in a few years, the factor of national safety may arise, or the disenfranchised people may be at risk. The new information seems like a warning: a business should beware of released data.

Garraghan says, “What is more, AI can serve as a cyber-attack tool where it can leak data; the model will be able to provide the data you requested so kindly.“ Presenting an example in January, the right-wing social media platform Gab looked for tutorials that accidentally revealed their instructions. GPT-based platform OpenAI previously had its models misused, too.

Garraghan continues: “Information can be obtained from unauthorized VoIP attacks. Also, the AI can be reverse-engineered, and the system can be bypassed or tricked into open access to other systems. Thus, information leakage might be one of the major areas cut across all industries, including those that may be externally or internally facing.” He also enumerates other risks, such as model evasion, whereby the model is misguided or purposely evaded by fabricating deceitful data.

Garraghan pointed out that inserting malicious commands or tainting data with wrongly conceived solutions into audio tracks is another threat. Additionally, he notes that the general cyber impact is the reputational damage commonly experienced after cyber attacks. Similarly, the areas at greatest risk of the negative consequences of this sort of cyberattack are where the stakes are highest. With many industries and institutions that operate online, public safety and the lack of leaked confidential information, for example, are important values that financial services and healthcare sectors should prioritize.

Garraghan says: The link here is that the more structured and controlling you are in your industry and the more risks you face from AI (and also from experience: the less you use AI, the less you adopt), the worse your AI is. Maybe they are not the lazier ones. For example, they may simply be going through the most grave stages in their life.

Proactive AI security measures

AI will take care of those risks in any specific company, though he says it will require layers of security (because AI in business already needs them) for users’ protection. “You already have cybersecurity tools specializing in different sectors,” says Garraghan. “These may be security posture management tools, detection, response tools, firewalls, and support for the shift left design, such as code scanning. You will soon require AI in line with digital transformation.” The genre specializes exclusively in AI and ML, whereas systems like neural nets get mentioned only occasionally

You’ll require an application scanner of neural networks, a response system of all types of neural networks, security testing, and red teaming for better control between apps, and you will need a response system of all types of neural networks. It is easier to fix and remediate them early instead of runtime issues. However, the best practice we recommend to organizations that use AI models or AI applications and services purchased is that before it goes to production, it is easier to fix all the problems with models. Then, after it goes to production, only problems that need to be fixed may be identified.

In a nutshell, Garraghan’s take is as follows: “The top thing, if cybersecurity begins with I, is by simply swapping application or software with AI as a tool. No? You need application security to detect threats. AI as a tool is also included.”

Recent conversions

3400 THB to ETH 1 MKD to EUR 0.00066 BTC to ETH 0.0016 BTC to USD 18990 ISK to CHF 0.058 BTC to EUR 1 TWD to AUD 0.8 BTC to USD 0.00000180 BTC to GBP 12000 BITS to NZD 7530 KRW to EUR