Generative AI marked a groundbreaking leap forward in the adoption of artificial intelligence.
Unfortunately, new technology often brings new risks for organizations. Data leaks and security incidents at global Gen AI players over the last few months have put the topic in everyone's focus.
What are the technological risks, what do threat vectors look like, and what can be done to secure Large Language Models in production?
Mnemonic AI is at the forefront of LLM research and development, and you can join us on May 15th at Google Austin as we discuss LLM security with experts from GitLab, DoiT, and Google.
Free spots are limited; registration is open until Friday the 10th:
2Gather: Security and AI