Generative AI is quickly becoming the coding sidekick developers didn’t know they needed. It can dramatically speed up development, from generating boilerplate functions to helping debug tricky logic. However, AI is not infallible. Just as you wouldn’t paste untested third-party code into your application, you shouldn’t hand control to an AI tool without carefully considering what’s in the prompt.
When you paste code, architecture details, or sensitive data into a request to a public AI model, you might reveal information that should never leave your secure environment. When you accept AI-generated output without question, you could be importing security flaws straight into your product.
This is where the idea of secure prompting comes in.
Why Secure Prompting Matters for Developers
AI tools don’t inherently ‘forget’ what you share with them. Depending on the provider’s settings, prompts can be stored, reviewed, or even used to train future models. That means a casually copied production code snippet could end up outside your organization’s control, surfacing as output in someone else’s request.
The risks go beyond just unintended training:
- Data Source Prompt Manipulation – Malicious instructions hidden in external data that the model consumes and acts on.
- Data and Model Poisoning – Training or reference data is tampered with, so its provenance and accuracy can no longer be trusted.
- Direct Prompt Injection – A user sends a carefully crafted prompt that causes the LLM to produce a response its designers never intended.
- Data Leakage to 3rd Parties – Output from an LLM tool may contain third-party intellectual property or data that is then integrated into your company’s products.
Principles of Secure Prompting
Writing a secure prompt is less about creativity and more about applying the right patterns, iterating, and staying disciplined. These principles can help you avoid unintentional leaks and risky output:
- Minimize Sensitive Data – If the model doesn’t need the information to answer the question, leave it out.
- Abstract with Placeholders – For sensitive code, replace real function names, keys, or database schemas with neutral examples (see the sketch after this list).
- Scope Narrowly – Ask for specific help (“validate user input in Node.js”) instead of vague, sweeping tasks.
- Guide Toward Security – Include security requirements in the request.
- Verify Output – Treat AI code as untrusted until you’ve reviewed and tested it.
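To make the placeholder principle concrete, here is a minimal sketch of how you might scrub obvious secrets from a snippet before it ever reaches a prompt. The regexes and placeholder names are illustrative assumptions, not a complete redaction tool:

```python
import re

# Illustrative patterns only; extend this list for your own environment.
REDACTIONS = [
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "<GITHUB_TOKEN>"),    # GitHub personal access tokens
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<AWS_ACCESS_KEY_ID>"),  # AWS access key IDs
    (re.compile(r"postgres://\S+"), "<DATABASE_URL>"),         # database connection strings
]

def scrub_snippet(snippet: str) -> str:
    """Replace known secret patterns with neutral placeholders before prompting."""
    for pattern, placeholder in REDACTIONS:
        snippet = pattern.sub(placeholder, snippet)
    return snippet
```

A helper like this is a safety net, not a substitute for judgment: the safest sensitive value is the one you never paste in the first place.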
These habits don’t just protect your organization; they strengthen your ability to think like a secure developer, even outside of AI interactions.
From Concept to Code
Let’s see what this looks like in practice:
Example 1
Insecure Prompt: Write me a Python login function using this code from our authentication model: [paste real code]
Why it’s risky: Proprietary code is now outside of the secure environment.
Secure Prompt: Write a secure Python login function that uses industry best practices for password hashing and input validation. Assume the existence of a placeholder function called authenticate_user()
Why it’s better: Keeps sensitive code private and clearly directs the AI toward secure patterns.
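For illustration, here is a minimal sketch of the kind of function that secure prompt might yield. The username pattern and length limits are illustrative assumptions, and authenticate_user() is the placeholder named in the prompt, standing in for your real credential check:

```python
import re

# Illustrative allow-list for usernames; adjust to your own rules.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,64}$")

def authenticate_user(username: str, password: str) -> bool:
    """Placeholder named in the prompt; swap in your real authentication logic."""
    raise NotImplementedError

def login(username: str, password: str) -> bool:
    """Validate inputs before delegating to the placeholder authenticator."""
    # Reject malformed usernames before they reach any downstream system.
    if not isinstance(username, str) or not USERNAME_PATTERN.match(username):
        return False
    # Enforce a sane password length without logging or echoing the value.
    if not isinstance(password, str) or not (8 <= len(password) <= 128):
        return False
    return authenticate_user(username, password)
```

Because the real authentication code never left your environment, you can review and wire in the generated scaffolding on your own terms.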
Example 2
Insecure Prompt: Write a Node.js function that authenticates against the GitHub API using this API key: ghp_12345abc…
Why it’s risky: Credentials are leaked into the AI system.
Secure Prompt: Write a Node.js function that authenticates against the GitHub API using an environment variable called GITHUB_TOKEN. Include best practices for storing and accessing the token.
Why it’s better: Promotes secure key management and avoids credential exposure.
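The secure prompt above targets Node.js; for consistency with the other examples, here is the same environment-variable pattern sketched in Python. GITHUB_TOKEN is the variable named in the prompt, and the call uses GitHub’s standard /user endpoint via the third-party requests library:

```python
import os
import requests

def get_authenticated_user() -> dict:
    """Call the GitHub API using a token read from the environment."""
    token = os.environ.get("GITHUB_TOKEN")
    if not token:
        # Fail fast rather than sending an unauthenticated request.
        raise RuntimeError("GITHUB_TOKEN is not set")
    response = requests.get(
        "https://api.github.com/user",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```

The key design choice is that the credential lives in the runtime environment (or a secrets manager), never in the prompt, the source code, or version control.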
Example 3
Insecure Prompt: Generate a login system that uses SHA1 hashing for passwords.
Why it’s risky: Explicitly asks for insecure practice; AI may comply.
Secure Prompt: Generate a login system in Python that uses bcrypt for password hashing, salting, and input validation. Ensure it resists common attacks like brute force and SQL injection.
Why it’s better: Explicitly directs the AI to modern security practices.
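As a minimal sketch of the hashing pattern the secure prompt asks for, the snippet below uses the bcrypt library; gensalt() generates a per-password salt that is embedded in the stored hash, so no separate salt column is needed. Resistance to SQL injection comes from parameterized queries in your persistence layer, not from the hashing itself:

```python
import bcrypt

def hash_password(password: str) -> bytes:
    """Hash a password with bcrypt; the salt is generated and embedded automatically."""
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password: str, stored_hash: bytes) -> bool:
    """Check a candidate password against the stored bcrypt hash."""
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)
```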
Secure Prompting as a Professional Skill
Secure prompting isn’t just about avoiding mistakes; it’s about building a professional habit. It blends a developer’s technical expertise with the AI’s ability to automate grueling tasks.
Think of it as part of the shift-left security approach: the earlier in the process you address security – even at the AI prompt stage – the fewer problems you’ll need to fix later on. As AI tools become more integrated into development workflows, knowing how to write prompts securely will be as essential as knowing how to commit code safely.
Next Steps
Generative AI is here to stay, and so is the need to use it wisely. Writing secure prompts is a small change in habit that can make a big difference in keeping your code, data, and organization safe. Security Journey’s AI/LLM Security Training equips developers with the skills to thrive in the age of AI.
Learn more about AI/LLM Security Training