
Aspen: Guardian AI

Proactive security guidance for AI-generated code

Turn real CI security findings into continuously updated AI guidance, so coding assistants adapt to your project instead of relying on static rules that quickly fall out of date.


Keep AI Coding Assistants Aligned with Your Secure Coding Standards

AI coding assistants are increasingly responsible for the patterns that get repeated across a codebase. When those patterns are insecure, teams end up fixing the same vulnerabilities over and over again in CI. 

Aspen: Guardian AI closes that gap by connecting security findings directly to how AI assistants are guided—helping prevent known insecure patterns from being generated again. 

 


How Aspen: Guardian AI Works

  • Uses Real CI Security Findings 
    Aspen: Guardian AI analyzes SAST findings from your CI pipeline to identify recurring and high-impact vulnerability patterns specific to your project.

  • Converts Findings Into AI Guidance 
    Those findings are translated into concise, project-specific rules that guide AI coding assistants toward approved secure coding practices and away from known insecure implementations.

  • Updates Existing Rule Files via Pull Request 
    Guardian AI updates your existing AI rule files—such as copilot-instructions.md or CLAUDE.md—through reviewable pull requests, giving teams full visibility and control.

  • Evolves as Your Codebase Changes 
    As new issues appear, Aspen: Guardian AI continuously refines guidance so AI assistants stay aligned with changes in frameworks, architecture, and development practices.

Reduce Repeat Vulnerabilities From AI-Generated Code

Most security tools identify problems after code is written. Aspen: Guardian AI focuses on preventing the same problems from recurring.

When AI-generated code is flagged as insecure in CI, Guardian AI feeds that signal back into the assistant's guidance, reducing the likelihood that the same vulnerability pattern appears again. Over time, this leads to fewer repeat findings and less remediation effort for AppSec and development teams.

Aspen: Guardian AI adapts AI coding tools based on real security patterns, helping your team avoid writing the same insecure code over and over.

Designed for Developer Workflows 

Aspen: Guardian AI integrates cleanly into existing engineering processes: 

  • No runtime enforcement or blocking controls 
  • No interruption to developer flow 
  • No replacement of existing AI coding assistants 

All updates are transparent, version-controlled, and delivered through standard pull requests.
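Purely as an illustration (the filename, rule wording, and diff are hypothetical examples, not actual Guardian output), a Guardian-generated pull request might propose a change like this to an existing rules file:

```diff
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ Security guidance
 - Use the project's logging helper instead of print statements.
+- Never build SQL with string concatenation; use parameterized
+  queries (recurring CWE-89 findings in CI).
+- Do not hardcode credentials; read secrets from the environment
+  (recurring CWE-798 findings in CI).
```

Because the change arrives as a normal pull request, reviewers can accept, edit, or reject each rule before it influences the assistant.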


How Aspen: Guardian AI Differs from Other AI Security Approaches for Secure Coding

Many AI security approaches rely on static rules or generic guardrails applied broadly across projects. Because these controls rarely reflect the vulnerabilities that actually occur in a given codebase, they often fail to reduce repeat findings.

Other approaches typically: 

  • Apply fixed, one-size-fits-all AI rules 
  • Measure or score insecure AI-generated code 
  • Detect and remediate issues after code is written 

Aspen: Guardian AI: 

  • Learns directly from real CI security findings 
  • Updates AI guidance at the project level 
  • Reduces repeat vulnerabilities over time 
  • Improves how AI assistants generate code, not just how issues are reported 

Instead of enforcing static controls, Aspen: Guardian AI continuously adapts guidance based on how vulnerabilities actually appear in your codebase.

Frequently Asked Questions

What does Guardian receive from our environment?

Guardian receives only three inputs, all visible and auditable in your public GitHub Action configuration:

  • Scan results from your chosen security scanner (e.g., Snyk, Bandit, SonarQube).
  • The contents of your AI coding assistant rules file, a customer-controlled file already present in your repo.
  • The instruction file’s filename/path, letting Guardian return the updated version to the correct location.

These are the only data elements sent to Guardian’s API. No source code is ever sent. 
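As a sketch only, assuming hypothetical field names (this is not Guardian's actual API schema), the three inputs above could be represented as a request payload like this:

```python
import json

# Hypothetical payload -- field names are illustrative, not Guardian's real API schema.
payload = {
    # 1. Scan results from the CI security scanner (SAST findings)
    "scan_results": [
        {"cwe": "CWE-89", "rule": "sql-injection", "severity": "high"},
        {"cwe": "CWE-798", "rule": "hardcoded-credentials", "severity": "medium"},
    ],
    # 2. Current contents of the AI coding assistant rules file
    "rules_file_content": "# Project guidance\n- Use parameterized queries.\n",
    # 3. Path of the rules file, so the updated version lands in the right place
    "rules_file_path": ".github/copilot-instructions.md",
}

print(json.dumps(payload, indent=2))
```

Note that nothing in such a payload needs to include source code; the scanner findings and the rules file are the only repository-derived content.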

Does Guardian have access to customer source code?

No. Guardian never receives or processes your source code.  

Guardian operates only on the vulnerability metadata your scanner produces and the instruction file you provide. 

What data is sent to the LLM?

Guardian sends a minimal, trimmed-down subset of the scan results (primarily CWE identifiers and minimal supporting context), plus the current instruction file and a Guardian-generated prompt. 

The LLM never receives:

  • Source code
  • Repository metadata
  • Customer identifiers
  • Secrets or environment details

This ensures the LLM remains completely blind to customer identity and codebase details.
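A minimal sketch of the trimming idea described above, with assumed field names (this is not Guardian's implementation): classification fields such as the CWE identifier are kept, while anything tied to the repository or its code is dropped before the data reaches the LLM.

```python
# Illustrative only: reducing a scan finding to minimal classification
# fields (CWE id plus brief context). Field names are assumptions.

def trim_finding(finding: dict) -> dict:
    """Keep only vulnerability classification data; drop anything that
    could identify the customer or expose source code."""
    allowed = {"cwe", "rule", "severity"}
    return {k: v for k, v in finding.items() if k in allowed}

raw = {
    "cwe": "CWE-798",
    "rule": "hardcoded-credentials",
    "severity": "medium",
    "file": "src/auth/login.py",      # dropped: repository detail
    "snippet": "API_KEY = 'abc123'",  # dropped: source code
}
print(trim_finding(raw))
```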

How do we know what Guardian sends to the API?

Everything Guardian transmits is fully visible in your GitHub Action. There is no hidden data, and customers can audit the exact request line by line. 

Does Guardian store customer data?

Guardian stores some vulnerability metadata, such as the vulnerability name, classification, and associated CWE tags.

No source code or instruction file contents are stored. 

Can Guardian avoid storing data associated with the customer account?

Yes. Guardian can anonymize customer data if requested.

This further prevents any direct association between vulnerability metrics and your identity. 
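One generic way such anonymization can work (a sketch of keyed hashing, not a description of Guardian's actual scheme) is to replace the account identifier with a salted pseudonym, so vulnerability metrics remain groupable without revealing who they belong to:

```python
import hashlib
import hmac

# Hypothetical per-deployment secret -- not a real Guardian value.
SECRET_SALT = b"rotate-me"

def pseudonymize(customer_id: str) -> str:
    """Derive a stable pseudonym from an account id via a keyed hash.

    The same id always maps to the same pseudonym (metrics stay
    groupable), but the id cannot be recovered without the salt.
    """
    return hmac.new(SECRET_SALT, customer_id.encode(), hashlib.sha256).hexdigest()[:16]
```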

Does the instruction file contain sensitive information?

AI code assistant rule files should avoid sensitive data, since their contents are shared with code-generation systems.

Guardian never adds any sensitive data. Instruction files typically contain rules such as:

  • Code style conventions
  • Preferred libraries and patterns
  • Security requirements (e.g., “reject algorithm ‘none’”)

These are not customer identifiers and do not reveal proprietary information.

Does Guardian learn from our code or use our data to train models?

No. Guardian does not ingest code, does not train or fine-tune models using individual customer data, and does not retain any customer information for model improvement.

How does Guardian ensure data sovereignty and privacy?

Guardian’s design philosophy is privacy first, as reinforced in the product vision:

  • Customer data stays within your environment.

  • Only the current rules file and CWE data from the SAST tools are used.  

How do we know Guardian will not break our compliance requirements?

Guardian aligns with common regulatory and internal requirements by ensuring:

  • No customer code is transmitted.

  • No customer data is used to train models.

  • Full auditability of data flow via GitHub Action.

Can Guardian operate across multiple repositories securely?

Yes. All vulnerability metrics roll up to a single tenant, ensuring:

  • No cross-repo data leakage

  • Strictly isolated tenant boundaries

If you need additional levels of isolation, please contact support. 

Is Guardian open to code review or security review?

Yes. The GitHub Action that sends data to Guardian is fully open and reviewable. Customers can inspect:

  • What data is sent

  • How it is structured

  • When it is transmitted