Supporting Continuous Learning in AI Governance and Security
Posted by: Ed Dunnahoe
I’d like to begin this post with a heartfelt thank you to everyone who joined our recent Brick House webinar on AI governance. Many of you requested resources to help stay informed about AI developments and continue your learning journey, and we didn’t have the opportunity to answer as thoroughly as we would have liked. Since we were running out of time as these questions started coming in, I figured I’d take some time to compile some of the resources I use to learn and keep up [as well as I can in the time I have] with what’s going on in the industry. This will include the few resources we mentioned at the end of the webinar in addition to several others.
Obviously, these are just suggestions to check out. Even if you only look into the things on this list and don’t explore the rest of the Internet for yourself, it’s enough content that you’re not going to get through all of it accidentally. You may find that you need to pare it back or that it’s not really hitting the topics that are most relevant to your situation, and that’s fine. The expectation is that through these resources, you’ll come across other data sources that are more interesting and relevant. You can also constantly groom the list and prune others out until you dial it in to what you want.
This is what I use today, and as I’m sure you’re aware, this stuff moves too fast to stand still, so the list may look different for me tomorrow or next week, and is just meant to get you going. With all that said, let’s dive in:
Courses and Practical Exercises
Courses
I haven’t looked through all of the material or taken all of these from start to finish, but they are resources that looked promising enough for me to keep track of. They may not be perfect, but they should at least get you far enough along to start figuring out your own path based on what you need the most:
- DeepLearning.ai offers learning on specific AI topics that may be as quick as less than 2 hours per course. Some seem like they’d help you implement some individual use cases quickly, but some might find it feels like blindly following instructions without fully understanding the “why” behind what they’re doing.
- AWS training through AWS Skill Builder caters to both technical and non-technical audiences but may be of limited use if you’re not using AWS in your organization. As of this writing, an annual subscription to Skill Builder is just shy of $450. The analogue for Azure looks to be Microsoft Learn and the GCP side looks like Cloud Skills Boost.
- Udemy is a more general resource with a ton of courses about a ton of topics. There are AI courses that are leadership focused that don’t get too far in the weeds and are only a handful of 20-30 minute sessions long. There are others that get into more detail, which are 30-40+ hours’ worth of content and exercises. As of this writing, an annual subscription is around $200, so while it isn’t free, it’s far more accessible than SANS training.
- On that note, if you have money to burn or work for a company with a particularly generous training budget, SANS SEC535, SEC545, and SEC595 could be good options.
- If you’re looking for a group style training focused on secure development with AI, I’d be remiss if I didn’t mention the A.I. Security Fundamentals course that our own Application Security team offers. Reach out through the website if you’re interested.
Practical Exercises
If you’ve had enough of reading and just want to do something, these more practical resources might be more up your alley:
- The Crucible AI CTF from Dreadnode provides hands-on capture-the-flag challenges to test your AI security knowledge. If you’re familiar with HackTheBox for penetration testing, this is more or less the same thing for AI.
- The Gandalf prompt injection game offers an entertaining way to understand prompt engineering and security by giving you simple prompt injection challenges that get increasingly more difficult.
- Practical exercises for prompt injection through Immersive Labs progresses you through prompt injection challenges that get increasingly more difficult as you continue, like Gandalf.
Staying Informed: Newsletters
These are the newsletters I’m currently subscribed to. Honestly, it’s probably a bit too much content and there is a lot of overlap, but I generally feel *informed enough* when having conversations with other folks in the industry as long as I stay on top of them. They also (usually) don’t go so deep into the true data science academia that I get lost, which is helpful:
- TLDR Tech: There are different subscription “versions” available for this one, for AI versus InfoSec, and others. The AI version is true to its namesake and helps make you aware of important events while providing links to drill into the stories that grab your attention when you want more detail.
- Ben’s Bites: Similar to TLDR, this one is well balanced between being too long and giving you enough detail to know what’s going on and whether you want to read more. You can also pay for a subscription that will get you access to basic tutorials and training. They also maintain a list of AI tools that has been a good reference for me.
- AI News: This one summarizes a LOT of data sources, including the conversations from several Discord servers. It can get more into the weeds than the others and it’s also much longer than the others. It’s good information and it’s comprehensive, but it’s not a quick read.
- Import AI: This one is similar to TLDR and Ben’s Bites, but it covers fewer stories in a little bit more detail and offers a summary of “why this matters” type opinion from an expert in the field, which I find valuable.
- The Batch: This one is curated by Andrew Ng’s team (who is one of the people I suggest following down below) and it feels like more of an expert’s perspective on what is going on currently, rather than an exhaustive list of headlines.
- Unsupervised Learning: Daniel Miessler’s newsletter is the one we were talking about towards the end of the webinar. This one is usually also not a short read (relative to the others), but he uses very little fluff and has a good mix of AI and InfoSec topics. He does include commentary, but I feel like he does a good job of separating facts from his opinion rather than just jumping straight to telling you how to think about something.
Social Media Follows
I personally don’t care for social media in general and I rely more heavily on the email newsletters to keep up. That said, these are the folks that I follow on the off chance that I want to check in and see what’s going on in real time. There are TONS of others that you could follow, but I don’t spend enough time on the platform to keep up with even this small list, much less adding more. Some of them also get into topics that I don’t understand, and others simply just let too much of their personal or political views shape their commentary for my tastes. If you don’t like X, you can probably find these same people on the platform of your choice.
- Ben Tossell (@bentossell)
- Andrew Ng (@AndrewYNg)
- Allie Miller (@alliekmiller)
- Bernard Marr (@BernardMarr)
- Daniel Miessler (@DanielMiessler)
- Andrej Karpathy (@karpathy)
- Speaking of Andrej, he has posted a couple of videos recently. In the first, he explains in depth how LLMs like ChatGPT work, and in the second, talks about how he personally uses LLMs. They’re 2 hours and 3.5 hours long respectively, so they’re hardly something you’ll get through quickly, but they’re worth making time for if you’re serious about building a foundation of knowledge.
Hopefully this post does better justice to at least some of the great questions you all posed during the webinar than what we were able to cover in the closing minutes. If you have some resources that you find indispensable that didn’t make this list and you were linked here from a social media post, I’d be interested in having a look if you’d consider dropping it in the comments of the post that brought you here.
Again, many thanks for those that were able to join the webinar! I truly enjoyed how engaging it was and how you all led the discussion with your interests and concerns rather than us having to just talk through a list of topics hoping that you found them interesting.
Before you go…
Obligatory Mention: AI Generated Content
Whatever your feelings may be, what’s a post about AI without including some AI-generated content? Here’s the response when I asked an LLM for guidance on how you might set yourself up to start consuming these resources and deepening your understanding:
Creating Your Learning Plan
I recommend starting with:
- Subscribe to 2-3 newsletters that match your information consumption style
- Follow key thought leaders on social media
- Choose one course or practical exercise aligned with your immediate learning goals
- Set aside regular time for AI learning and exploration
The field of AI is vast, and continuous learning is essential. These resources will help you build a solid foundation while staying current with new developments.
Community Engagement
Many of you expressed interest in continuing our discussions beyond the webinar. Feel free to share these resources with your colleagues and consider forming study groups around specific courses or topics.
I hope these resources prove valuable in your ongoing AI education journey and thank you again for your engagement during our webinar.
Ed Dunnahoe
VP, Innovation,
GuidePoint Security
Edward ("Ed") Dunnahoe has over fifteen years of information security experience, with an emphasis on vulnerability and risk assessments, audits, social engineering, penetration testing, and information security program management. Defensively, he has been responsible for information security program development and engineering objectives, including security monitoring, incident response, and vulnerability management. As a consultant, he delivered a full spectrum of information security consulting services in environments ranging from startups to Fortune 50 companies and significantly contributed to the growth of GuidePoint's red team practice in both delivery and leadership capacities for nearly a decade. Currently, Ed leads an AI center of excellence, overseeing enterprise-level implementation of generative AI technologies and coordinating AI-centered professional service development, with a focus on enhancing cybersecurity capabilities through artificial intelligence.
Ed holds a bachelor's degree in Information Systems and Decision Sciences from LSU and maintains an extensive portfolio of industry certifications, including CISSP, SSCP, CISM, OSCP, and nine SANS/GIAC certifications, including the prestigious GIAC Security Expert (GSE). He was also among only 25 professionals worldwide to achieve both the Red and Blue Team Cyber Guardian designations from SANS before the program was retired in 2021.