AI-powered Tabletop Exercises: Risks and Benefits

Your ability to come out the other side of a cyber attack in one piece depends a lot on your preparation. Preparing to handle an attacker who breaches your initial defences involves a number of activities:

  • Building a defendable architecture
  • Creating a way to detect attacks
  • Having a plan for what to do when bad things happen
  • Exercising that plan

Many organizations do a good job of building a reasonably defendable architecture, and also have some detection capabilities. Some have an incident response plan that they have never exercised – because planning and executing good exercises is hard work and takes time! But without exercises you don’t really know your plan, and you don’t know whether it is actionable. Exercising is what makes your defendable architecture defended.

Speeding up exercise planning with AI

At work we have helped companies exercise for a long time, and the last couple of years we have had success using AI to significantly speed up exercise preparations and make them better. AI is also very helpful in war gaming exercises to generate realistic artefacts on the fly during an exercise. For now, let’s focus on how we can use AI to create good scenarios faster.

  1. We can use the AI to suggest scenarios and learning paths
  2. We can use the AI to generate artefacts to support the exercise – including deep fake videos, phishing emails, voice recordings, etc.
  3. We can use the AI to match exercise content to actual descriptions of response plans and architectures

All of this can expose relatively sensitive data to the AI provider. Would you be OK with that? If the scenario is completely generic, no harm done. But what if you want an exercise built on your actual architecture, real vulnerabilities and your actual response plans? Uploading all of that to a third-party company may not be what your CISO considers acceptable.

Please have your say on this in my one-question anonymous poll here: https://cryptpad.fr/form/#/2/form/view/NvaPgVGmKqoyx2Idfu9h4Jz3pYQs8fF8JrngIPw9ID8/.

– Help me understand what you would accept using!

A cloud-first approach – easy and fast, but is it acceptable?

As a test of how well an AI-based app can support scenario development, I created a vibe-coded prototype using Firebase services and Gemini. The app takes your description of a scenario, can ingest response plans, network drawings, risk assessment reports, etc., and generates a scenario in phases, with supporting artefacts like logs and emails.

Screenshot of cloud based AI platform

Technically this platform reduces the time to develop a great tabletop exercise from weeks to less than an hour. That is pretty amazing – but at the same time:

  • Documents are uploaded to a cloud bucket for analysis
  • Inference is done with a third-party AI service – how the data fed to it is used is quite hard to track and explain
  • The generated scenario itself can reveal real architectural concepts, vulnerabilities, key dependencies, etc. It is stored in a cloud database.
  • Access to the scenario during the exercise is protected by authentication – but is that good enough?

It isn’t obvious that using a cloud service for this use case is irresponsible – but proper security planning and transparency are very important!

This platform supports executing the exercise within the platform itself – including a built-in chat, an AI advisor for the various participating roles, and hotwash report generation – all very useful features in an exercise. But it is also possible to use the AI only to generate the exercise and then download it in more traditional formats, such as PowerPoint, for local use. The files and data in the cloud can then be deleted after generation, significantly reducing the time they are exposed to potential threat actors.
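A retention routine for such a "generate, export, delete" workflow could look something like the sketch below. The bucket name and the 24-hour retention window are invented for illustration; the pure selection logic is separated out so the actual deletion (which would use a cloud storage client such as google-cloud-storage) stays a one-liner:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy: exercise artefacts older than MAX_AGE
# are deletion candidates once the PowerPoint export is done.
MAX_AGE = timedelta(hours=24)

def expired_blobs(blobs, now=None):
    """Return names of blobs whose last-updated timestamp exceeds MAX_AGE.

    `blobs` is an iterable of (name, updated) pairs, mirroring the
    metadata a cloud storage client typically exposes on blob objects.
    """
    now = now or datetime.now(timezone.utc)
    return [name for name, updated in blobs if now - updated > MAX_AGE]

# In a real app (assumption: the google-cloud-storage client and a
# bucket named "exercise-artefacts"), deletion would look roughly like:
#   bucket = storage.Client().bucket("exercise-artefacts")
#   for name in expired_blobs((b.name, b.updated) for b in bucket.list_blobs()):
#       bucket.blob(name).delete()
```

Running this on a schedule (e.g. a nightly cloud function) keeps the exposure window bounded even if someone forgets to clean up manually.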

PPT generated by AI in cloud system – is that better?

Less data in the cloud – less risky?

We can of course build AI-supported processes with less cloud integration too.

  1. A local service using an external AI service. That avoids storing a lot of sensitive data in a cloud environment but still exposes sensitive data to a third-party AI service.
  2. A local service including local AI inference to generate scenarios. This avoids the cloud risk (but the model and local software can still be poisoned/malicious).

As an example, here’s another take on the “tabletop support application”, where the user can choose between local and cloud-based AI models.

AI platform that lets you choose between Mistral and Ollama as AI provider. Ollama is running locally on the server using an open source model (qwen2.5:3b, developed by Alibaba Cloud but running locally).
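A minimal sketch of how such a provider switch might work. The endpoints are the public defaults for Ollama and Mistral’s chat completions API; the helper names and model choices are illustrative, not the actual app’s code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"          # local inference
MISTRAL_URL = "https://api.mistral.ai/v1/chat/completions"  # cloud inference

def build_request(provider: str, prompt: str, api_key: str = "") -> dict:
    """Return the URL, headers and JSON payload for the chosen AI provider."""
    if provider == "ollama":
        return {
            "url": OLLAMA_URL,
            "headers": {"Content-Type": "application/json"},
            "payload": {"model": "qwen2.5:3b", "prompt": prompt, "stream": False},
        }
    if provider == "mistral":
        return {
            "url": MISTRAL_URL,
            "headers": {"Content-Type": "application/json",
                        "Authorization": f"Bearer {api_key}"},
            "payload": {"model": "mistral-small-latest",
                        "messages": [{"role": "user", "content": prompt}]},
        }
    raise ValueError(f"unknown provider: {provider}")

def generate(provider: str, prompt: str, api_key: str = "") -> str:
    """Send the request. Scenario data only leaves the host for 'mistral'."""
    req = build_request(provider, prompt, api_key)
    http_req = urllib.request.Request(
        req["url"], data=json.dumps(req["payload"]).encode(), headers=req["headers"]
    )
    with urllib.request.urlopen(http_req) as resp:
        body = json.loads(resp.read())
    return body["response"] if provider == "ollama" else body["choices"][0]["message"]["content"]
```

The security-relevant point is visible in `build_request`: with the local provider, nothing but a localhost call is made, so sensitive exercise material never leaves the server.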

Threat modeling our options

As a threat actor, how would you try to exploit these tabletop applications? This is not a deep dive, but here are some considerations worth looking at.

  1. Cloud native application
    • Try to get access to the cloud environment (identity breach)
    • Get access to detailed data (files, chat logs from actual exercises, scenario details)
    • Use data to plan attack on company
  2. On-prem app with cloud-based AI provider
    • Get access to the AI platform (identity breach)
    • Locate logs that help you gain insight on data shared with the AI
    • Look for file storage on AI platform, or auth mechanisms allowing access to SharePoint, etc
    • Use data to plan an attack on the company
  3. On-prem app with local AI provider
    • Attack the application itself

The actual risk exposure from the AI provider depends on the settings in the AI platform. The ability to control your data usage varies across platforms, subscription tiers, and what you actually configure.

API privacy options in Mistral’s AI studio platform

In summary: no matter where you store your data, you need to take measures to protect it. That is achievable using cloud services too, but it doesn’t happen automatically. The key control layers for data protection in the cloud are identity, encryption and access control – all configurable by the cloud consumer. That said, running the exercise platform entirely locally can be a valid security strategy, depending on the threats you worry about. Using a local model can even bring you the benefits of AI as an exercise partner in air-gapped environments.

And will AI make your exercises better? Not automatically, but it can definitely support the exercise team in creating better, more realistic and dynamic exercise scenarios!

Endnote: tech that enables organizations to exercise will improve cyber resilience – even with a slightly expanded technical attack surface. Plans without execution are useless.

– Me.

Technologies used for the experiments mentioned in this post

  • AI Models at runtime
    • Gemini Flash 2.5
    • Gemini Pro 3.1
    • Mistral-small-latest
    • Mistral-medium-latest
    • Qwen-2.5-3b
  • AI models used to generate code for these prototype platforms
    • Mistral Vibe (CLI coding agent from Mistral)
    • GitHub Copilot with
      • GPT-5.3-Codex
      • Claude Sonnet 4.6
  • AI providers
    • Microsoft/GitHub (used in VSCode)
    • Google Gemini (used in Gemini chat + in code)
    • Mistral (used in Mistral Vibe + in code)
    • Ollama (used to run local AI model in code)
  • Cloud technologies
    • Google Firebase (Firestore, Firebase Auth, Firebase Storage)
    • Google AI Studio (Gemini API access)
    • Mistral AI Studio (Mistral API access)
    • GitHub for code repositories (private)
  • Technology stack for apps
    • Typescript/Vite/Fastify
    • Playwright e2e tests
    • Sqlite

Beyond SEO: How to Optimize Your WordPress Blog for AI Answer Engines

In 2026, the way people find information has fundamentally changed. We are moving from the “Search” era to the “Answer” era. Instead of scrolling through a list of blue links on Google, users are asking ChatGPT, Gemini, and Perplexity for direct answers.

If you run a professional blog on a platform like WordPress, you might be facing a hidden problem: AI Bots are reading your site, but they might not be “understanding” it correctly.

Here is how you can transform your blog from a simple collection of text into a high-authority data source for AI Answer Engines (AEO).

The Problem: The “Script” Barrier

To help AI models understand the context of your post (who the author is, what the main facts are, or what steps are in a roadmap), we usually use JSON-LD Schema. This is a block of code that tells the machine exactly what the page is about.

However, many WordPress users (especially those on the Starter or Explorer plans) are blocked from adding <script> tags for security reasons. If you can’t add the code, the AI has to “guess” your meaning—and that’s when hallucinations happen.

The Solution: Semantic Microdata

If you can’t use scripts, you use Microdata.

Microdata allows you to tag the text that is already on your page. By adding small attributes like itemprop="headline" or itemtype="https://schema.org/FAQPage", you are essentially highlighting the most important parts of your post for the AI bots.
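To make this concrete, here is a minimal sketch of what such tagging might look like on a blog post. The headline and author name are placeholders; the `itemscope`/`itemtype`/`itemprop` attributes are the standard schema.org microdata vocabulary:

```html
<article itemscope itemtype="https://schema.org/BlogPosting">
  <h1 itemprop="headline">Beyond SEO: Optimizing for AI Answer Engines</h1>
  <p>By <span itemprop="author" itemscope itemtype="https://schema.org/Person">
    <span itemprop="name">Jane Doe</span>
  </span></p>
  <div itemprop="articleBody">
    <p>AEO is the practice of optimizing content for AI answer engines.</p>
  </div>
</article>
```

Because these are ordinary HTML attributes rather than a `<script>` block, they pass through editors that strip or forbid script tags.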

Three Steps to AEO Success:

  1. Answer-First Writing: Start your sections with a clear, one-sentence answer to the question you are addressing. AI models prioritize the first few sentences under a heading.
  2. The “Key Takeaways” FAQ: Add a summary at the end of your post. This serves your human readers who are in a hurry, but more importantly, it provides a structured “Fact Sheet” for AI agents.
  3. Use the AEO Specialist Agent: To make this easy, I have built a custom AI agent that reads any URL and generates the necessary Microdata code for you.

Try the AEO Generator

I’ve created a specialized Gemini agent that handles the technical heavy lifting. You give it your URL, and it gives you back an HTML block ready to paste into your WordPress editor.

Link: AEO Microcode Specialist on Gemini

Key Takeaways for AI & Readers

In the “Answer Era,” blogs must move beyond traditional SEO to Answer Engine Optimization (AEO). Since many WordPress platforms restrict JSON-LD scripts, using Semantic Microdata within HTML is the most effective way to help AI models like ChatGPT and Perplexity index your facts correctly and avoid hallucinations.

How to Optimize for AEO

Step 1: Answer-First Writing: Begin every section with a clear, direct one-sentence answer to provide immediate context for AI crawlers.

Step 2: Add a Structured Summary: Include a “Key Takeaways” or FAQ block at the end of your post to serve as a machine-readable fact sheet.

Step 3: Implement Microdata: Use HTML attributes like itemprop and itemscope to tag your content manually without needing prohibited script tags.
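As a sketch of Step 3 applied to the FAQ block from Step 2, a single question–answer pair marked up with the schema.org FAQPage vocabulary might look like this (the wording is condensed placeholder text):

```html
<div itemscope itemtype="https://schema.org/FAQPage">
  <div itemprop="mainEntity" itemscope itemtype="https://schema.org/Question">
    <h3 itemprop="name">What is Answer Engine Optimization (AEO)?</h3>
    <div itemprop="acceptedAnswer" itemscope itemtype="https://schema.org/Answer">
      <p itemprop="text">AEO is the practice of optimizing content so AI
      answer engines can accurately extract and cite it.</p>
    </div>
  </div>
</div>
```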


What is Answer Engine Optimization (AEO)?

AEO is the practice of optimizing content specifically for AI answer engines (like Gemini, ChatGPT, and Perplexity) to ensure they can accurately extract and present your information as a direct answer.

Why should WordPress users use Microdata instead of JSON-LD?

Many WordPress plans (Starter/Explorer) prohibit the use of <script> tags. Microdata allows you to embed schema directly into your HTML tags, making it compatible with all WordPress versions.

How do AI bots use this structured data?

Structured data provides “explicit” meaning to your text, reducing the chance of AI hallucinations and increasing the likelihood that your site will be cited as a primary source.

The Showdown: SAST vs. Github Copilot – who can find the most vulnerabilities?

Vibe coding is popular, but how does “vibe security” compare to throwing traditional SAST tools at your code? “Vibe security review” seems to be a valuable addition to the arsenal here, and in this test it performed better than both Sonarqube and Bandit!

Here’s an intentionally poorly programmed Python file (generated by Le Chat with instructions to create a vulnerable and poorly coded text adventure game):

import random
import os

class Player:
    def __init__(self, name):
        self.name = name
        self.hp = 100
        self.inventory = []

    def add_item(self, item):
        self.inventory.append(item)

def main():
    player_name = input("Enter your name: ")
    password = "s3Lsnqaj"
    os.system("echo " + player_name)
    player = Player(player_name)
    print(f"Welcome, {player_name}, to the Adventure Game!")

    rooms = {
        1: {"description": "You are in a dark room. There is a door to the north.", "exits": {"north": 2}},
        2: {"description": "You are in a room with a treasure chest. There are doors to the south and east.", "exits": {"south": 1, "east": 3}},
        3: {"description": "You are in a room with a sleeping dragon! There is a door to the west.", "exits": {"west": 2}},
    }

    current_room = 1

    while True:
        room = rooms[current_room]
        print(room["description"])

        if current_room == 3:
            action = input("Do you want to 'fight' the dragon or 'flee'? ").strip().lower()
            if action == "fight":
                if random.randint(0, 1):
                    print("You defeated the dragon and found the treasure! You win!")
                else:
                    print("The dragon defeated you. Game over!")
                break
            elif action == "flee":
                current_room = 2
                continue

        command = input("Enter a command (go [direction], get [item]): ").strip().lower()

        if command.startswith("go "):
            direction = command.split("go ")[1]
            if direction in room["exits"]:
                current_room = room["exits"][direction]
            else:
                print("You can't go that way.")
        elif command.startswith("get "):
            item = command.split("get ")[1]
            player.add_item(item)
            print(f"You picked up {item}.")
        else:
            print("Invalid command.")

if __name__ == "__main__":
    main()

The code is unnecessarily convoluted and allows user input to be injected directly into a shell command. It also has a hardcoded “password” that is not actually used for anything. Let’s check the security of this with three methods:

  1. Sonarqube Cloud (free version)
  2. Bandit (Python SAST tool)
  3. GitHub Copilot with GPT-4o

Sonarqube is run with the default ruleset “Sonar Way” with no extra configuration effort. The scan detects “2 security hotspots” and some maintainability issues. A security hotspot is a potential weakness that the SAST tool cannot conclusively classify, so it requires manual review.

The first hotspot is the “password”:

Sonarqube also gives guidance on how to assess if this is dangerous, and how to fix it, which is quite helpful. The other hotspot is “weak cryptography” when the game is using random.randint to choose whether fight or flee is the winning strategy when encountering a dragon – not a (software) security issue. Dragons can be dangerous, though!

Now, let’s try Bandit.

Bandit did its job – and provides links to more information, including CWE references. Blue is “not good” and red is “bad”.

OK, now for Github Copilot.

The prompt – followed by the formatted report:

You are a professional software quality expert and security test engineer. Please review this code and provide an overview of security weaknesses with references to CWE's, and suggested fixes. Analyse the current file. Create the output as a security vulnerability report in markdown format. 

Security Vulnerability Report for spaghetti.py

This report provides an analysis of the security weaknesses in the provided spaghetti.py file. Each vulnerability is referenced with its corresponding Common Weakness Enumeration (CWE) and includes suggested fixes.


Vulnerabilities

1. Hardcoded Password

  • Description: The password is hardcoded in the file (password = "s3Lsnqaj"), making it vulnerable to exposure if the file is shared or accessed by unauthorized users.
  • CWE Reference: CWE-798: Use of Hard-coded Credentials
  • Impact: Hardcoded passwords can lead to unauthorized access to sensitive resources.
  • Suggested Fix:
    • Store the password securely using environment variables or a secrets management tool.
    • Example:
      import os
      password = os.getenv("GAME_PASSWORD")
      if not password:
          raise ValueError("Password not set in environment variables")
      

2. Command Injection

  • Description: The os.system call uses unsanitized user input (os.system("echo " + player_name)), which can lead to command injection if a malicious user provides crafted input.
  • CWE Reference: CWE-78: Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’)
  • Impact: A malicious user could execute arbitrary system commands, potentially compromising the host system.
  • Suggested Fix:
    • Use safer alternatives like subprocess.run with proper input sanitization.
    • Example:
      import subprocess
      subprocess.run(["echo", player_name], check=True)
      

3. Unvalidated User Input

  • Description: User input (player_name, action, command) is not validated or sanitized, which can lead to unexpected behavior or vulnerabilities.
  • CWE Reference: CWE-20: Improper Input Validation
  • Impact: Unvalidated input can lead to crashes, security vulnerabilities, or incorrect program behavior.
  • Suggested Fix:
    • Validate and sanitize all user inputs.
    • Example:
      player_name = input("Enter your name: ").strip()
      if not player_name.isalnum() or len(player_name) > 50:
          print("Invalid name. Please enter a valid name.")
          return
      

4. Predictable Randomness

  • Description: The use of random.randint(0, 1) for critical game logic (e.g., dragon fight outcome) is predictable and not cryptographically secure.
  • CWE Reference: CWE-338: Use of Cryptographically Weak Pseudo-Random Number Generator (PRNG)
  • Impact: Predictable randomness can lead to unfair or exploitable outcomes in the game.
  • Suggested Fix:
    • Use a cryptographically secure random number generator if randomness is critical.
    • Example:
      import secrets
      if secrets.choice([True, False]):
          print("You defeated the dragon and found the treasure! You win!")
      else:
          print("The dragon defeated you. Game over!")
      

5. No Input Length Restriction

  • Description: The player_name input does not restrict the length, which could lead to buffer overflow or other issues in certain scenarios.
  • CWE Reference: CWE-1283: Improper Validation of Specified Quantity in Input
  • Impact: Excessively long input could cause performance issues or crashes.
  • Suggested Fix:
    • Limit the length of user input and validate it.
    • Example:
      player_name = input("Enter your name: ").strip()
      if len(player_name) > 50:
          print("Name too long. Please enter a shorter name.")
          return
      

6. No Error Handling

  • Description: The code lacks error handling for invalid commands, missing keys in dictionaries, or unexpected input.
  • CWE Reference: CWE-703: Improper Check or Handling of Exceptional Conditions
  • Impact: The program may crash or behave unpredictably when encountering errors.
  • Suggested Fix:
    • Add proper exception handling to ensure the program handles errors gracefully.
    • Example:
      try:
          direction = command.split("go ")[1]
          if direction in room["exits"]:
              current_room = room["exits"][direction]
          else:
              print("You can't go that way.")
      except KeyError:
          print("Invalid room configuration.")
      except Exception as e:
          print(f"An unexpected error occurred: {e}")
      

Summary

The code contains several vulnerabilities that could compromise its security and reliability. Addressing these issues will improve the robustness and safety of the application.

Recommendations

  1. Replace hardcoded sensitive information with secure storage methods.
  2. Avoid using os.system with unsanitized input; use safer alternatives like subprocess.run.
  3. Validate all user inputs and restrict their length to prevent unexpected behavior.
  4. Use proper error handling to ensure the program does not crash due to invalid input or unexpected conditions.
  5. Consider using cryptographically secure randomness for critical game logic if necessary.

By implementing these fixes, the code will be more secure and maintainable.

Summary

Using Copilot to do a security review and generate a report produced the most thorough assessment, which is not surprising. Bandit did quite well and detected the genuinely dangerous injection vulnerability. Sonarqube provides a nice UI but did not detect the one actually serious vulnerability here.

Mastering Your AI Kitchen: Crafting AI Prompts for Business Efficiency & Enhanced Learning

Welcome to your personal AI Kitchen! In today’s fast-paced business world, time is your most precious ingredient, and Artificial Intelligence (AI) tools are the revolutionary kitchen gadgets you didn’t know you needed. Just like a great chef uses precise instructions to create a culinary masterpiece, mastering the art of “prompt engineering” for AI is your secret to unlocking unparalleled efficiency and supercharging your learning journey with generative AI.
Inspired by the “AI Prompt Cookbook for Busy Business People,” let’s dive into how you can whip up amazing results with AI for business.

Now everyone can have their own executive assistant – let AI help you make your day easier and more pleasant to navigate.

The Secret Ingredients: Mastering the Art of AI Prompting


Think of your AI tool as an incredibly smart assistant, often powered by Large Language Models (LLMs). The instructions you give it – your “AI prompts” – are like detailed recipe cards. The better your recipe, the better the AI’s “dish” will be. The “AI Cookbook” highlights four core principles for crafting effective AI prompts:


Clarity (The Well-Defined Dish): Be specific, not vague. When writing AI prompts, tell the AI exactly what you want, leaving no room for misinterpretation. If you want a concise definition of a complex topic, specify the length and target audience for optimal AI efficiency.


Context (Setting the Table): Provide background information. Who is the email for? What is the situation? The more context you give in your AI prompt, the better the AI understands the bigger picture and tailors its response, leading to smarter AI solutions.


Persona (Choosing Your AI Chef): Tell the AI who to act as or who the target audience is. Do you want it to sound like a witty marketer, a formal business consultant, or a supportive coach? Defining a persona helps the AI adopt the right tone and style, enhancing the quality of AI-generated content.


Format (Plating Instructions): Specify the desired output structure. Do you need a bulleted list, a paragraph, a table, an email, or even a JSON object? This ensures you get the information in the most useful way, making AI for productivity truly impactful.


By combining these four elements, you transform AI from a generic tool into a highly effective, personalized assistant for digital transformation.
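Putting the four ingredients together, a complete “recipe card” prompt might look like the sketch below. The company, product and word limit are invented purely for illustration:

```
Persona: You are a friendly but professional marketing copywriter.
Context: Acme Coffee is a small roastery launching a subscription box
for office customers next month.
Task: Write a launch announcement email to our existing customers.
Clarity: Keep it under 120 words, with one clear call to action.
Format: A subject line, two short paragraphs, then a button label.
```

Each line maps to one of the four principles, which makes the prompt easy to reuse: swap the Context and Task lines and the rest of the recipe still works.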


AI for Work Efficiency: Automate, Accelerate, Achieve with AI Tools


Well-crafted AI prompts are your key to saving countless hours and boosting business productivity. Here’s how AI, guided by your precise instructions, can streamline your work processes:


Automate Repetitive Tasks: Need to draft a promotional email, generate social media captions, or outline a simple business plan? Instead of starting from scratch, a clear AI prompt can give you a high-quality first draft in minutes. This frees you from mundane tasks, allowing you to focus on AI strategy and human connection.


Generate Ideas & Summarize Information: Facing writer’s block for a blog post series? Need to quickly grasp the key takeaways from a long market report? AI tools can brainstorm diverse ideas or condense lengthy texts into digestible summaries, accelerating your research and content creation efforts.


Streamline Communication: From crafting polite cold outreach emails to preparing for challenging conversations with employees, AI can help you structure your thoughts and draft professional messages, ensuring clarity and impact across your business operations.


The power lies in your ability to instruct. The more precise your “recipe,” the more efficient your “AI chef” becomes, driving business automation and operational excellence.


AI for Enhanced Learning: Grow Your Skills, Faster with AI


Beyond daily tasks, AI is a phenomenal tool for continuous learning and competence development. It’s like having a personalized tutor and research assistant at your fingertips:

Identify Key Skills: Whether you’re looking to upskill for a new role or identify crucial competencies for an upcoming project, AI can generate lists of essential hard and soft skills, complete with explanations of their importance for professional development.


Outline Learning Plans: Want to master a new software or understand a complex methodology? Provide AI with your current familiarity, time commitment, and desired proficiency, and it can outline a structured learning plan with weekly objectives and suggested resources for AI-powered learning.


Generate Training Topics: For team leads, AI can brainstorm relevant and engaging topics for quick team training sessions, addressing common challenges or skill gaps. This makes professional development accessible and timely.


Structure Feedback: Learning and growth are fueled by feedback. AI can help you draft frameworks for giving and receiving constructive feedback, making these conversations more productive and less daunting.


AI empowers you to take control of your learning, making it more targeted, efficient, and personalized than ever before.


Your AI Kitchen Rules: Cook Smart, Cook Ethically


As you embrace AI in your daily operations and learning, remember these crucial “kitchen rules” from the “AI Cookbook”:


Always Review and Refine: AI-generated content is a fantastic starting point, but it’s rarely perfect. Always review, edit, and add your unique human touch and expertise. You’re the head chef!


Ethical Considerations: Be mindful of how you use AI. Respect privacy, avoid plagiarism (cite sources if AI helps with research that you then use), and ensure your AI-assisted communications are honest and transparent. For a deeper dive into potential risks, especially concerning AI agents and cybersecurity pitfalls, you might find this article insightful: AI Agents and Cybersecurity Pitfalls. Never input sensitive personal or financial data into public AI tools unless you are certain of their security protocols and terms of service.


Keep Experimenting: The world of AI is evolving at lightning speed. Stay curious, keep trying new prompts, and adapt the “recipes” to your specific needs. The more you “cook” with AI, the better you’ll become at it.


The future of business is undoubtedly intertwined with Artificial Intelligence. By embracing AI as a collaborative tool, you can free up valuable time, automate mundane tasks, spark new ideas, and ultimately focus on what you do best – building and growing your business and yourself.


So, don’t be afraid to get creative in your AI kitchen, and get ready to whip up some amazing results. Your AI-powered business future is bright!


Ready to master your AI kitchen? Unlock even more powerful “recipes” and transform your business today! Get your copy of the full AI Prompt Cookbook here: Master Your AI Kitchen!

Transparency: AI helped write this post.