AI Academy I AI Agent Hacks & GTC 2025 Insights
Talk #1 How to Hack an Agent – or Not · Thomas Fraunholz @Smart Labs AI Large language models (LLMs) are not as secure as they seem. Beyond their tendency to “hallucinate,” they can be manipulated using jailbreaks and adversarial prompts, bypassing safeguards designed to keep them in check. But the real challenge arises when...