Now AI Apps Can Be Monitored in Real Time with AI Control Center to Stop Harmful User Input

Full version at Cryptopolitan.

WhyLabs, an artificial intelligence observability startup, has debuted a new product that gives companies real-time control over the outputs of their AI applications. The platform, called AI Control Center, is designed to monitor and control the operation of AI solutions to ensure their reliability and safety, something traditional monitoring systems cannot do reliably.

WhyLabs introduces the AI Control Center

The company said that businesses face many challenges today, especially around LLMs. Teams must ensure their LLMs are not susceptible to prompt injection attacks, so that the models do not produce harmful or inaccurate outputs that could damage the company's reputation.

The company's solution can help tackle these issues by evaluating the operation of LLMs in real time. It covers the full range of areas that can be problematic for LLMs, including user prompts, the response-generation process (such as retrieval-augmented generation, or RAG), and the responses the model produces. Alessya Visnjic, CEO and co-founder of WhyLabs, said:

“Our customers are moving Generative AI initiatives from prototypes to production, where security and quality are paramount.”

Mentioning traditional observability tools that only perform passive monitoring, she added,

“Passive observability tools alone are not sufficient for this leap because you cannot afford a 5-minute delay in learning that a jailbreak incident has occurred in an application. Our new security capabilities equip AI teams with safeguards that prevent unsafe interactions in under 300 milliseconds, with 93% threat detection accuracy.”

Source: Businesswire, WhyLabs.
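The sub-300-millisecond guarantee Visnjic describes implies the safety check must run inline, within a strict latency budget. A minimal Python sketch of that idea, using a hypothetical `check_prompt` detector (a stand-in heuristic, not WhyLabs' actual detection model), might enforce a deadline and fall back to a fail-closed policy on timeout:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Hypothetical detector; a production system would call a trained
# classifier here, not a keyword check.
def check_prompt(prompt: str) -> bool:
    """Return True if the prompt looks unsafe (toy heuristic)."""
    return "ignore previous instructions" in prompt.lower()

def guarded_check(prompt: str, budget_s: float = 0.3,
                  fail_closed: bool = True) -> bool:
    """Run the detector within a latency budget (e.g. 300 ms).

    If the detector does not finish in time, either block the
    request (fail closed) or let it through (fail open), by policy.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(check_prompt, prompt)
        try:
            return future.result(timeout=budget_s)
        except TimeoutError:
            # Treat a timeout as unsafe by default.
            return fail_closed
```

The fail-closed default reflects the article's framing: a missed jailbreak is costlier than an occasional blocked benign prompt.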

Traditional monitoring vs. AI-based monitoring

According to Visnjic, the key difference between traditional monitoring and AI observability lies in identifying which risk factors to measure and how to measure them consistently. She says,

“LLMs and gen AI applications open up a whole new set of security challenges that we haven’t solved before.”

Source: SiliconANGLE.

She explains these challenges in terms of how a system pinpoints a prompt injection or jailbreak attempt, or any other malicious input from an end user. WhyLabs claims AI Control Center gives companies real-time control over AI application behavior, helping cybersecurity and business teams mitigate a range of risks. Businesses can fine-tune the observability system and define their own rulesets for advanced threat detection, giving them more precise control; the rules can be updated continuously as new threats emerge.
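The custom rulesets described above can be pictured as named categories of patterns that incoming prompts are screened against. The sketch below is purely illustrative, assuming hypothetical rule names and regexes; WhyLabs has not published this interface, and real detection relies on far more than keyword matching:

```python
import re

# Hypothetical rulesets a team might define and extend as new
# threats emerge. Category names and patterns are illustrative only.
RULES = {
    "jailbreak": [
        re.compile(r"ignore (all|any|previous) instructions", re.I),
        re.compile(r"pretend (you are|to be)", re.I),
    ],
    "prompt_injection": [
        re.compile(r"system prompt", re.I),
        re.compile(r"reveal your (instructions|rules)", re.I),
    ],
}

def screen_prompt(prompt: str) -> dict:
    """Return which rule categories the prompt triggers, if any."""
    triggered = [
        category
        for category, patterns in RULES.items()
        if any(p.search(prompt) for p in patterns)
    ]
    return {"blocked": bool(triggered), "categories": triggered}
```

Because the rules live in plain data rather than code, a team can add a new category or pattern without redeploying the application, mirroring the continuous tuning the article describes.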

In this way, teams can build high-quality datasets from their AI applications' interactions with users, and the platform also supports investigating problems that surface over time. These capabilities are now available on the WhyLabs AI Control Platform, alongside other tools for monitoring AI model performance, as the company expands its offering with $14 million in backing. Key investors include Bezos Expeditions, AI Fund, and Madrona Venture Group.

The original story can be seen here.
