Securing Generative AI: Strategies, Methodologies, Tools, and Best Practices
.MP4, AVC, 1280x720, 30 fps | English, AAC, 2 Ch | 3h 38m | 585 MB
Instructor: Omar Santos
This course offers a comprehensive exploration into the crucial security measures necessary for the deployment and development of various AI implementations, including large language models (LLMs) and retrieval-augmented generation (RAG). Discover key considerations and mitigations to reduce the overall risk in organizational AI system development processes.
Author and tech trainer Omar Santos covers the essentials of secure-by-design principles, focusing on security outcomes, radical transparency, and building organizational structures that prioritize security. Along the way, learn more about AI threats, LLM security, prompt injection, insecure output handling, red-teaming AI models, and more. By the end of this course, you'll be prepared to apply your newly honed skills to protect RAG implementations, secure vector databases, select embedding models, and leverage powerful orchestration libraries like LangChain and LlamaIndex.
Learning objectives
- Explore security for deploying and developing AI applications, retrieval-augmented generation (RAG), agents, and other AI implementations.
- Leverage hands-on practical skills drawn from real-world AI and machine learning cases.
- Incorporate key security considerations at every stage of AI development, deployment, and operation.