Charting a Secure Course in Generative AI: Our Investment in Liminal 
Investment Announcement
07/11/2024

By Christian Ostberg, General Partner

Generative AI has revolutionized how businesses and individuals operate, offering vast improvements in productivity and efficiency. However, large enterprises face significant challenges in adopting this technology, especially in managing data privacy and security. Recent developments in generative AI and the speed at which new tools are coming to market underscore the urgency of addressing these issues as enterprises work to integrate AI responsibly. 

According to an IBM study, while 94% of executives acknowledge the importance of securing AI solutions before deployment, only 24% plan to integrate cybersecurity measures into their generative AI projects in the next six months. This gap underscores a significant challenge: balancing the rapid implementation of AI with the necessity of robust security measures. With AI budgets expected to triple between 2023 and 2025, driven by large companies, the willingness to adopt AI despite these risks raises critical questions about data security. 

Generative AI’s ability to generate a virtually unlimited stream of real-time outputs introduces new security challenges for enterprises. IBM also found that 96% of executives believe adopting generative AI increases the likelihood of security breaches within their organizations over the next three years. In the context of AI security, enterprises must navigate three key challenges: 

  • Regulatory Compliance: Generative AI applications often handle compliance-defined data types, posing a high risk of data leakage and non-compliance. Regulations such as HIPAA, CCPA, and GDPR mandate stringent controls to protect sensitive information. Accidental leaks can lead to severe penalties, breach compliance requirements, and erode customer trust and confidence. 
  • Sensitive Data and IP Leaks: The risk of intellectual property (IP) and enterprise data leaking into public domains or back into large language models (LLMs) is significant. Cyberhaven data suggests there are 6,350 attempts to paste corporate data into ChatGPT for every 100,000 employees. With the average cost of a data breach reaching $4.45M globally, and IP typically constituting 65% of the total value of Fortune 500 companies, there is a strong economic incentive to mitigate these risks. 
  • Reputational Risk: The use of generative AI, especially in customer-facing settings, can expose companies to reputational risks. AI models may inadvertently generate or interpret offensive, discriminatory, or inappropriate content. Such incidents can damage a brand’s integrity and public perception, impacting key business metrics.

For large enterprises, especially those in high-risk and regulated sectors, achieving complete control over data privacy, security, and sovereignty is essential to fully realize the benefits of generative AI. 

Introducing Liminal 

It was through this lens that we were excited to lead Liminal’s recent funding round. Led by Steven Walcheck and Aaron Bach, Liminal seeks to unlock a reliable and efficient way for enterprises to adopt generative AI while managing data security and compliance protocols. Liminal’s platform is designed to secure every interaction with generative AI, and to do so in a manner that offers a seamless experience for end users while also providing security teams with comprehensive AI oversight. 
 
A few of the many reasons why regulated organizations leverage Liminal: 

  • Fast, Accurate, Intelligent Data Protection: Liminal identifies sensitive data in prompts, enforces organizational security policies against that data prior to submission, then rehydrates the detected components upon return – all in a contextually-aware manner that optimizes security and minimizes user disruption (a simplified sketch of this redact-and-rehydrate flow follows the list below). 
  • Complete Observability: Liminal provides comprehensive, real-time insights into generative AI usage across the entire team, in interactions with any model, instance, or application. 
  • Unlimited Applications. One Security Platform: The Liminal Platform is multi-model and model-agnostic (supporting more than 100K generative AI models and applications), allowing teams to leverage the tools they prefer without compromising on security. 
  • An Optimal End-User Experience: Liminal’s workflow productivity tools integrate cleanly, work anywhere, provide model optionality, and include features like contextual awareness and rehydration to preserve and maximize the intended user experience. 
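
To make the redact-and-rehydrate pattern above more concrete, here is a minimal, hypothetical Python sketch. It is not Liminal’s implementation: where Liminal applies contextually aware, policy-driven detection across many sensitive data types, this sketch stands in a single email-address regex, and the placeholder scheme and function names (redact, rehydrate) are illustrative assumptions.

```python
import re
import uuid

# Hypothetical illustration only -- not Liminal's implementation.
# A single email regex stands in for contextual, policy-driven detection
# across many sensitive data types.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}")


def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected sensitive values with opaque placeholder tokens."""
    mapping: dict[str, str] = {}

    def _substitute(match: re.Match) -> str:
        token = f"<REDACTED_{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_PATTERN.sub(_substitute, prompt), mapping


def rehydrate(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response for the end user."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response


# The model only ever sees the placeholder, never the raw email address.
safe_prompt, mapping = redact("Draft a renewal reminder for jane.doe@example.com.")
model_output = f"Hi, here is a draft for {next(iter(mapping))}:"  # stand-in for an LLM call
print(rehydrate(model_output, mapping))
```

In a production system, a policy-enforcement step would sit between detection and submission, deciding how each detected element is handled before anything reaches the model.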

We welcome Liminal to the Fin Capital portfolio and look forward to supporting their mission to enable secure and efficient generative AI adoption for regulated enterprises.