Can an AI model reject your mortgage application on the basis of a learned societal bias? Will you fall victim to unethical or arbitrary machine-driven decisions? In the mind-spinning world of generative AI, the unveiling of Enkrypt AI’s Large Language Model (LLM) Safety Leaderboard marks a step forward in creating guardrails around the adoption of these powerful technologies.
Enkrypt AI’s aim is to define the standards of transparency and security in AI technology. The LLM Safety Leaderboard tackles this challenge head-on, enabling companies to quickly identify the safest and most reliable AI models for their specific needs.
Trust in tech
This tool does more than list options; it serves as a guide, highlighting each model’s vulnerabilities and fostering trust in the technology by putting reliability front and centre.
The need for such innovations is urgent. The rapid integration of generative AI, even into regulated sectors, underscores the growing concern among cybersecurity experts about the security and safety of large language models. That concern is not unfounded.
Take, for example, the potential repercussions in scenarios where an AI system might inadvertently perpetuate societal biases, such as a fintech system rejecting a loan application without sufficient transparency. Such instances not only highlight the inherent risks in current AI models but also underline the critical need for stringent checks and balances.
Enkrypt AI’s initiative allows AI engineers and tech teams to make informed decisions grounded in ethical and regulatory considerations. This is about more than technological advancement; it is about shaping a future in which AI can be trusted and benefits everyone.
The firm believes it is setting a new standard in the field. The leaderboard is part of Enkrypt AI’s broader Sentry suite, which is designed to bolster the deployment of LLMs with an added layer of security.
This suite, which includes Sentry Red Team, Sentry Guardrails, and Sentry Compliance, offers a comprehensive approach to managing and securing LLMs that meets the highest standards of privacy, security, and compliance.
Embedded ethics
The significance of the leaderboard extends beyond its immediate practical applications. It represents a commitment to ethical responsibility and compliance, testing models for bias and toxicity and checking their adherence to regulatory requirements. This ensures that the models align not only with technical needs but also with corporate values and societal norms.
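To make the idea of bias testing concrete, here is a minimal sketch of one generic probing technique: feeding a model paired prompts that differ only in a single demographic attribute and comparing the completions. The model, prompts, and inspection step are illustrative assumptions, not Enkrypt AI’s actual methodology.

```python
# Minimal sketch of counterfactual bias probing. Assumptions: a local
# Hugging Face model ("gpt2", chosen purely for illustration) and
# hand-written prompt pairs. This is NOT Enkrypt AI's test suite.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Prompt pairs that differ only in a single demographic attribute.
prompt_pairs = [
    ("The loan applicant from a wealthy suburb was",
     "The loan applicant from a deprived neighbourhood was"),
]

for prompt_a, prompt_b in prompt_pairs:
    completion_a = generator(prompt_a, max_new_tokens=20)[0]["generated_text"]
    completion_b = generator(prompt_b, max_new_tokens=20)[0]["generated_text"]
    # A real evaluation would score many completions for sentiment or
    # approval language; here we simply print them for inspection.
    print("A:", completion_a)
    print("B:", completion_b)
```

A production evaluation would run many such pairs and score the completions automatically, for instance with a sentiment classifier, rather than eyeballing the output.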
Furthermore, recent findings from Enkrypt AI’s preprint paper on “Increased LLM Vulnerabilities from Fine-tuning and Quantization” show how critical these efforts are. The study highlights that while these common practices can enhance the performance of LLMs, they also heighten security risks. However, solutions like Enkrypt’s Sentry Guardrails have proven effective in mitigating these risks, emphasising the importance of external safety measures.
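For readers unfamiliar with the practices the paper examines, the sketch below shows how 4-bit quantization is commonly applied when loading an open-weights model with the Hugging Face transformers and bitsandbytes libraries. The model name is a placeholder, and this illustrates the general technique rather than the paper’s exact experimental setup.

```python
# Minimal sketch of loading a model with 4-bit quantization via
# transformers + bitsandbytes. The model ID is a placeholder; the
# paper's exact setup may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "gpt2"  # placeholder: any causal LM on the Hugging Face Hub

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # requires the accelerate package and a GPU
)

# The quantized model behaves like the original at a fraction of the
# memory cost: the convenience the paper weighs against security risk.
```

The appeal is obvious, since a quantized model fits on far smaller hardware, which is precisely why the paper’s finding that such practices can heighten security risks deserves attention.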
Sahil Agarwal, CEO of Enkrypt AI, and Prashanth Harshangi, CTO, said the launch of the LLM Safety Leaderboard underscores the company’s dedication to enabling the safe and responsible use of generative AI. It’s not just about managing risks—it’s about empowering enterprises to embrace AI with confidence, ensuring that the digital decisions of today do not become the ethical dilemmas of tomorrow.
This is a call to action for all stakeholders in the AI ecosystem to prioritise safety and security as we forge ahead into this new digital era. The LLM Safety Leaderboard isn’t just a tool; it’s a beacon guiding us towards a more secure and equitable AI future.