    OpenAI Introduces Codex Security in Research Preview for Context-Aware Vulnerability Detection, Validation, and Patch Generation Across Codebases

    By Naveed Ahmad · March 7, 2026
    OpenAI has introduced Codex Security, an application security agent that analyzes a codebase, validates likely vulnerabilities, and proposes fixes that developers can review before patching. The product is now rolling out in research preview to ChatGPT Enterprise, Business, and Edu customers through Codex web.

    Why OpenAI Built Codex Security

    The product is designed for a problem that most engineering teams already know well: security tools often generate too many weak findings, while software teams are shipping code faster with AI-assisted development. In its announcement, OpenAI argues that the main issue is not just detection quality but a lack of system context. A vulnerability that looks severe in a generic scan may be low impact in the actual application, while a subtle issue tied to architecture or trust boundaries may be missed entirely. Codex Security is positioned as a context-aware system that tries to close that gap.

    How Codex Security Works

    Codex Security works in three stages:

    Step 1: Building a Project-Specific Threat Model

    The first step is to analyze the repository and generate a project-specific threat model. The system examines the security-relevant structure of the codebase to model what the application does, what it trusts, and where it may be exposed. That threat model is editable, which matters in practice because real systems usually include organization-specific assumptions that automated tooling cannot infer reliably on its own. Allowing teams to refine the model helps keep the analysis aligned with the actual architecture instead of a generic security template.
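
    OpenAI has not published the threat model's schema, so the shape below is purely illustrative: a minimal sketch, assuming a model made of entry points, trust boundaries, and team-supplied assumptions, showing why editability matters — the generated model can be refined with organization-specific knowledge the tool cannot infer.

```python
from dataclasses import dataclass, field


@dataclass
class TrustBoundary:
    """A crossing between two zones of differing trust, e.g. internet -> API."""
    name: str
    crosses: tuple[str, str]


@dataclass
class ThreatModel:
    """Hypothetical editable threat model for one repository (illustrative only)."""
    entry_points: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    boundaries: list[TrustBoundary] = field(default_factory=list)

    def add_assumption(self, note: str) -> None:
        # Teams refine the generated model with org-specific context
        # that automated analysis cannot infer reliably on its own.
        self.assumptions.append(note)


# A generated model for a hypothetical web app...
model = ThreatModel(
    entry_points=["POST /login", "POST /upload"],
    boundaries=[TrustBoundary("edge", ("internet", "api"))],
)
# ...which the team then corrects with an architectural fact:
model.add_assumption("uploads are pre-scanned by an internal gateway")
print(len(model.assumptions))  # 1
```

    The point of the sketch is the `add_assumption` step: keeping the model editable is what aligns later analysis with the real architecture rather than a generic template.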

    Step 2: Finding and Validating Vulnerabilities

    The second step is vulnerability discovery and validation. Codex Security uses the threat model as context to search for issues and classify findings by their likely real-world impact within that system. Where possible, it pressure-tests findings in sandboxed validation environments. If users configure an environment tailored to the project, the system can validate potential issues in the context of the running application. This deeper validation can reduce false positives further and may allow the system to generate working proofs of concept. For engineering teams, that distinction is important: a proof that a flaw is exploitable in the actual system is more useful than a raw static warning because it gives clearer evidence for prioritization and remediation.
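
    The validation logic can be pictured as a triage loop over static findings. This is a hypothetical sketch, not OpenAI's implementation: the finding fields, the `triage` helper, and the stubbed sandbox probe are all assumptions made up for illustration.

```python
# Two static findings with the same nominal severity; only one is
# actually reachable in the running application.
findings = [
    {"id": "F1", "rule": "sql-injection", "static_severity": "critical", "reachable": True},
    {"id": "F2", "rule": "weak-hash", "static_severity": "critical", "reachable": False},
]


def triage(findings, confirm):
    """Split findings into sandbox-confirmed and demoted lists.

    `confirm` stands in for a sandbox probe that tries to reproduce
    the issue against the running application.
    """
    validated, demoted = [], []
    for f in findings:
        (validated if confirm(f) else demoted).append(f)
    return validated, demoted


# Stub probe: the real system would exercise the app in a sandbox.
validated, demoted = triage(findings, lambda f: f["reachable"])
print([f["id"] for f in validated])  # ['F1']
```

    Two findings with identical static severity end up with very different priorities once reachability is tested, which is the false-positive reduction the announcement describes.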

    Step 3: Proposing Fixes with System Context

    The third step is remediation. Codex Security proposes fixes using the full surrounding system context, with the goal of producing patches that improve security while minimizing regressions. Users can filter findings to focus on issues with the highest impact for their team. In addition, Codex Security can learn from feedback over time. When a user changes the criticality of a finding, that feedback can be used to refine the threat model and improve precision in later scans.
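
    The feedback loop can be sketched as per-rule confidence weights nudged by user reclassification. Again a hypothetical illustration, assuming a `FeedbackStore` and multiplicative updates that OpenAI has not described; only the general idea (downgrades suppress similar findings in later scans) comes from the announcement.

```python
from collections import defaultdict


class FeedbackStore:
    """Hypothetical per-rule confidence, adjusted by user feedback."""

    def __init__(self):
        # Every rule starts at neutral confidence.
        self.weight = defaultdict(lambda: 1.0)

    def record(self, rule: str, user_kept: bool) -> None:
        # Keeping a finding nudges confidence up; downgrading it
        # halves the rule's weight in later scans.
        self.weight[rule] *= 1.1 if user_kept else 0.5


store = FeedbackStore()
# A user downgrades two "weak-hash" findings as low impact here:
store.record("weak-hash", user_kept=False)
store.record("weak-hash", user_kept=False)
print(round(store.weight["weak-hash"], 2))  # 0.25
```

    However the real mechanism works, the observable behavior should match this shape: repeated downgrades of a finding class make similar findings rank lower in subsequent scans.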

    Source: https://openai.com/index/codex-security-now-in-research-preview/

    A Shift from Pattern Matching to Context-Aware Review

    This workflow reflects a broader shift in application security tooling. Traditional scanners are effective at finding known classes of unsafe patterns, but they often struggle to distinguish between code that is theoretically risky and code that is actually exploitable in a specific deployment. OpenAI is effectively treating security review as a reasoning problem over repository structure, runtime assumptions, and trust boundaries, rather than as a pure pattern-matching task. That does not remove the need for human review, but it can make the review process narrower and more evidence-driven if the validation step works as described. This framing is an inference from the product design, not a benchmarked independent conclusion.

    Beta Metrics Reported by OpenAI

    OpenAI also shared beta results. Scans on the same repositories over time showed increasing precision, and in one case noise was reduced by 84% since the initial rollout. The rate of findings with over-reported severity decreased by more than 90%, while false positive rates on detections fell by more than 50% across all repositories. Over the last 30 days, Codex Security reportedly scanned more than 1.2 million commits across external repositories in its beta cohort, identifying 792 critical findings and 10,561 high-severity findings. OpenAI adds that critical issues appeared in under 0.1% of scanned commits. These are vendor-reported metrics, but they indicate that OpenAI is optimizing for higher-confidence findings rather than maximum alert volume.
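
    The reported numbers are internally consistent. Taking 1.2 million commits as the base (OpenAI says "more than", so the true rate is at most this):

```python
commits = 1_200_000
critical = 792

# Critical findings as a fraction of scanned commits.
rate = critical / commits
print(f"{rate:.4%}")  # 0.0660%

# Consistent with OpenAI's "under 0.1% of scanned commits" figure.
assert rate < 0.001
```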

    Open-Source Security Work and CVE Reporting

    The release also includes an open-source component, Codex for OSS. OpenAI has been using Codex Security on open-source repositories it depends on and sharing high-impact findings with maintainers. It lists OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium among the projects where it reported critical vulnerabilities, and says 14 CVEs have been assigned, with dual reporting on two of them.

    Key Takeaways

    • OpenAI launched Codex Security in research preview for ChatGPT Enterprise, Business, and Edu customers through Codex web, with free usage for the next month.
    • Codex Security is an application security agent, not just a scanner. OpenAI says it analyzes project context to identify vulnerabilities, validate them, and propose patches developers can review.
    • The system works in three stages: it builds an editable threat model, then prioritizes and validates issues in sandboxed environments where possible, and finally proposes fixes with full system context.
    • The product is designed to reduce security triage noise. In beta, it reports 84% less noise in one case, more than 90% reduction in over-reported severity, and more than 50% lower false positive rates across repositories.
    • OpenAI is also extending the product to open source through Codex for OSS, which offers eligible maintainers 6 months of ChatGPT Pro with Codex, conditional access to Codex Security, and API credits.



    Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.




