
Nurturing the Security Mindset

Many organizations seek to embed security and technical risk management into their development culture, making security a mindset and a fixture in daily practice. It seems logical to have the development or DevOps teams own their cybersecurity responsibilities as part of a security development lifecycle (SDL) program. This takes load off a separate, dedicated cybersecurity review team, which often becomes a chokepoint in fast-paced Agile environments and is difficult to scale to the number of software engineering projects many technology-focused organizations routinely run today. Proactively mitigating or eradicating security vulnerabilities at their point of origin is typically far more cost-effective than reacting to vulnerability escapes, incidents, and breaches after the software has been deployed.

As great as all of that sounds, the journey to widespread developer adoption of cybersecurity responsibility and high-caliber risk management activities often meets challenges that undermine the application security agenda. Domain expertise is a common barrier to entry: application security processes, methodologies, and expectations require a deep level of cybersecurity understanding to wield effectively. Developers and specialty engineers have presumably already chosen their professional focus: development. While they may accommodate basic cybersecurity principles and recognize security as an important subclass of overall quality, many developers do not have the time or inclination to invest in cybersecurity domain expertise to the level of a full-time security professional. Cybersecurity principles therefore need translation into developer-centric terms, rather than expecting developers to master a second domain of expertise in security.

“Note that a sufficiently robust design review process cannot be executed at CI/CD speed.” -Building Security In Maturity Model 9 (BSIMM9, AA1.2)

And it’s not just the application security body of knowledge: all of the traditional cybersecurity playbooks need to fit into existing development processes for adoption to be practical. Agile methodologies, iterative/incremental functionality growth, and CI/CD pipelines are heavily disrupted by manual security review checkpoints and by cumbersome processes and tools (vulnerability scans, penetration tests) that require a total application or ecosystem view for risk review. Cybersecurity policies that demand divergent handling of security defects complicate general defect management flows that otherwise treat all defects uniformly. Even risk management KPIs/OKRs and metrics can clash: engineering teams track, manage, and weigh all defect types equally, while application security programs report narrowly on security defect subtypes. Cyber risk management programs have become accustomed to reporting application security risk in terms of OWASP Top 10 coverage, CWE categorization, STRIDE impact, and CVSS magnitude. Expecting engineers to special-case security defects and supply extra metrics just to keep historical cybersecurity risk reporting consistent places a burden on the engineering teams for the sake of not having to change a risk management dashboard.

In short, organizations should not expect to simply ‘lift and shift’ application security and risk management practices out of the full-time cybersecurity professional domain and into the engineering domain. Many of the methodologies, metrics, and governance practices in the application security and cyber risk management space are effectively “by security professionals, for security professionals,” requiring a substantial investment in cybersecurity domain expertise to wield successfully. Asking developers to adopt application security methodologies born and bred in the cybersecurity industry is like asking everyone to learn the law in order to eliminate the basic need for lawyers and corporate counsel. The amount of learning necessary is simply too great to be practical.

When looking at the latest Building Security In Maturity Model report (BSIMM9), there is reasonable architecture analysis/design review (AA) adoption by participating organizations, with one caveat: it is generally performed by a dedicated software security group (SSG) separate from the actual developers. Only 12% of the BSIMM9 participating organizations defined an architectural threat modeling process standard for non-security groups to use (BSIMM9 AA2.1), and a mere 3% have non-security architects performing the analysis (BSIMM9 AA3.1). The 3% figure does not indicate whether the low adoption stems from disinterest or from failed attempts, although the 12% of organizations creating a process for non-security groups to use (AA2.1) suggests minimal organizational interest in having non-security groups even attempt some form of analysis.

Bringing Security to Non-Security Professionals

Organizations should look for approaches that have engineering teams naturally discovering and mitigating cybersecurity defects within their existing engineering processes. If we consider that security is a subset of quality, and security defects are just a subset of defects, then the question becomes how activities and advantages serving a cybersecurity agenda can be introduced into existing engineering quality and defect management processes.

Sounds good … but where should an organization begin? Which opportunities offer good cyber risk management value with minimal cybersecurity domain expertise required? Saying “just do security stuff” does not inspire confidence in strong security assurance.

Automated application security tools, particularly vulnerability scanners and security static analysis tools, are typically the first things to come to mind. The premise is attractive: let a tool loaded with security domain knowledge under the hood do all the work, leaving engineers to simply tackle the findings as part of typical defect management. In practice, however, engineering teams often hit hurdles and snags:

  • False positives can be overwhelming, and security domain expertise is often needed to qualify the findings

  • The quantity of findings (false or otherwise) may dissuade the engineering team from taking the results seriously

  • Without tool configuration/tuning (often requiring domain expertise), false negatives are likely; in particular, accurate configuration/identification of application entry points is critical for good coverage

  • Security domain expertise is often required to triage and translate results, typically expressed in security metrics, into a more generalized engineering defect representation (a sketch of such a translation follows this list)

  • Scanners and analyzers have historically been clumsy and slow to integrate effectively into CI/CD; ad-hoc usage outside the CI/CD pipeline is workable, but it separates the findings from existing quality and defect management processes
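
To make the translation hurdle concrete, here is a minimal sketch, in Go, of mapping scanner output into a team’s ordinary defect representation. The Finding fields, severity bands, and priority labels are all hypothetical; real scanners emit richer formats (e.g. SARIF), and every team’s defect schema will differ.

    package triage

    import "fmt"

    // Finding is a simplified, hypothetical scanner result.
    type Finding struct {
        RuleID string  // e.g. a CWE identifier such as "CWE-79"
        CVSS   float64 // 0.0-10.0 base score
        File   string
        Line   int
    }

    // Defect is the shape the engineering team already tracks.
    type Defect struct {
        Title    string
        Severity string // the team's existing P1/P2/P3 scale
        Location string
    }

    // ToDefect folds a security-specific score into the team's existing
    // priority scale so findings flow through normal defect management
    // instead of a parallel security-only process.
    func ToDefect(f Finding) Defect {
        sev := "P3"
        switch {
        case f.CVSS >= 9.0:
            sev = "P1"
        case f.CVSS >= 7.0:
            sev = "P2"
        }
        return Defect{
            Title:    "Scanner finding " + f.RuleID,
            Severity: sev,
            Location: fmt.Sprintf("%s:%d", f.File, f.Line),
        }
    }

The design point is simply that the security-specific vocabulary (CWE, CVSS) stays inside the adapter, while the output speaks the team’s existing defect language.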

The reality falls short of the dream. That doesn’t mean, however, that automated security tools are absent from engineering toolchains and pipelines. Various development environments (Visual Studio, Xcode, Android Studio, etc.) include built-in checks and lint rules for certain kinds of security problems. Toolchains such as Clang and Go now ship with memory (MSan), address (ASan), and race condition checkers that catch classes of engineering defects which also happen to cover some classes of security defects. While far from comprehensive in terms of vulnerability-type coverage, these tools reinforce the concept being discussed: defect identification tools that happen to include security defects, meant for developers and fitting naturally into the developer toolchain/pipeline, are highly consumable and welcome. These tools bring cybersecurity benefits into the developer’s world, rather than forcing the developer to tread into the world of cybersecurity.
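
As a tiny illustration of that natural fit, here is a contrived Go test containing a deliberate data race. Running the team’s ordinary test command with the race detector enabled (go test -race) reports it as a plain engineering defect, even though unsynchronized shared state also underlies many time-of-check/time-of-use security bugs. The package and test names are invented for illustration.

    package counter

    import (
        "sync"
        "testing"
    )

    // TestCounterRace increments a shared counter from two goroutines
    // without synchronization. `go test -race` flags the unsynchronized
    // writes; fixing it (e.g. with sync/atomic or a mutex) resolves a
    // quality defect and a potential security defect at the same time.
    func TestCounterRace(t *testing.T) {
        n := 0
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                n++ // racy read-modify-write on shared state
            }()
        }
        wg.Wait()
        t.Logf("final count: %d", n)
    }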

Fixated on Function

So, back to the same organizational question: where to start? The goal is to have engineering teams naturally identify, mitigate, and eliminate security defects as part of ongoing development. Teams are typically already identifying, mitigating, and eliminating general defects as part of their daily work — so why the absence of security defects?

Often it’s because recognizing a security vulnerability runs counter to what the developer is trying to achieve: implementing a path to correct operation. With all attention on making things work and on finding defects that prevent correct operation, attack vectors that involve intentionally manipulating the system into alternate, concerning behavior are not obvious; they do not sit on the logical path to correctness.

Identifying security defects therefore takes explicit attention beyond just “how do I make this work?” Yet developers are already looking past functional correctness and asking questions regarding quality:

  • “This works/looks correct, but does it have the right performance?”

  • “This works/looks correct, but will it scale?”

  • “This works/looks correct, but is it cost-effective to maintain and operate?”

  • “This works/looks correct, but could it cascade errors?”

In that moment, developers are in the right mindset for cybersecurity. They are setting aside their focus on correctness and reviewing aspects that relate to or impact that correctness. This is the phase where we need to inject the security mindset questions, preferably in a security-expertise-agnostic form:

  • “This works/looks correct, but can it be intentionally made to do something concerning?”

  • “This works/looks correct, but what about the edge cases and boundary conditions?”

  • “This works/looks correct, but what are the ramifications if it fails to operate correctly?”

These are not necessarily easy questions to answer, but they are simple enough to open the door to investigating potential defects that align with cybersecurity risks and threats. Security practitioners will recognize this as the launching pad into an important security analysis activity: threat modeling, or more specifically, design analysis.

Design Analysis for Developers

SDL and other formal application security programs often include some form of threat modeling or design analysis in the security activity portfolio. Threat modeling approaches vary from simple to thorough, but all tend to seek the same goal: a systematic analysis, from an attacker’s perspective, of viable threats and attacks against your systems, applications, and assets. The output of successful threat modeling can drive defensive prioritization, architectural requirements, and cybersecurity countermeasures. Bad threat models, on the other hand, waste time, misplace cybersecurity investment, and lead to overconfidence in security posture. The goal is to right-size the investment in design analysis so that it produces meaningful results that development and operations teams can act on while also satisfying the organization’s risk management policies (application security KPIs/OKRs, etc.), preferably without requiring the development team to become cybersecurity professionals.

Classic threat modeling and design analysis methodologies, unfortunately, are somewhat esoteric outside of the information security industry (and sometimes even within it). Specialty assessment methodologies (STRIDE, DREAD, OCTAVE, TARA/TAL, CVSS), diagramming methods (attack trees, graphs, data flow diagrams/DFDs), and significant attention to attacker motivations and TTPs (ATT&CK and other threat-centric modeling) are clearly security domain expertise items; they are simply not readily usable by a development team without committed learning and practice. Newer design analysis techniques like VAST and process flow diagramming (PFD) are advertised as palatable to developers and Agile-friendly environments, but the tradeoffs these techniques make can accidentally hide areas of risk that the more thorough, classic approaches highlight.

Despite the caveat emptor warnings just made, design analysis is still a great place for development teams to start, as long as cybersecurity risk managers realign expectations: the goal is to instigate natural, ongoing application risk analysis as part of existing development efforts, not to deliver high-assurance, comprehensive risk reviews of the caliber of a dedicated cybersecurity professional. It will take time for teams to evolve their natural thinking to include cybersecurity, but that seed must be planted, and planted in a manner where it takes permanent root rather than remaining an occasional, shallow dalliance into foreign cybersecurity thinking.

A recommended approach to bootstrapping cybersecurity design analysis in development teams is to provide curated questions expressed in developer defect terms. For example:

  • Will incoming data containing HTML characters be stored and/or rendered back to the user browser, affecting the HTML output?

  • Will incoming data containing reserved data query characters cause database/data queries to malfunction?

  • Will incoming data containing file path characters cause file system operations to malfunction?

  • Does this component trust or depend upon a parent/upstream component to address data sanity/filtering or security for this component? If so, are there ways to bypass that parent/upstream component and directly access this component to violate that sanity checking/filtering or trust?

  • Are there any other ways anything can access this component at runtime besides the official way it is being used?

  • Is there separate/external configurability for this component? If so, where does that configuration come from, where does it live during runtime, and what can access/change that configuration?

(This list is for illustration purposes and is not comprehensive)

Notice how the questions are all expressed in readily consumable, developer-friendly language, without any mention of application security terms such as XSS, SQL injection, file path tampering, or access control bypass. Test-driven development (TDD) environments and QA teams can immediately derive test and validation scenarios for components from this list. Development teams that constantly ask and address the above questions as quality/defect inquiries are indirectly tackling a large portion of the OWASP Top 10 vulnerability categories without knowing it.
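
For instance, the first question translates directly into an ordinary table-driven test. The sketch below assumes a hypothetical Go application that renders user data through html/template; the template, package name, and inputs are invented for illustration, and a real suite would cover the remaining questions the same way.

    package render

    import (
        "bytes"
        "html/template"
        "strings"
        "testing"
    )

    // page is a stand-in for wherever the application renders user data.
    var page = template.Must(template.New("p").Parse(`<p>{{.}}</p>`))

    // TestHTMLCharactersDoNotAffectOutput asks the first curated question
    // as a plain defect check: incoming data containing HTML characters
    // must not change the structure of the rendered HTML.
    func TestHTMLCharactersDoNotAffectOutput(t *testing.T) {
        inputs := []string{
            `<script>alert(1)</script>`, // markup embedded in user data
            `" onmouseover="x`,          // attribute-breaking characters
        }
        for _, in := range inputs {
            var buf bytes.Buffer
            if err := page.Execute(&buf, in); err != nil {
                t.Fatal(err)
            }
            if strings.Contains(buf.String(), "<script>") {
                t.Errorf("input %q altered the HTML structure", in)
            }
        }
    }

Nothing in the test mentions XSS; it simply pins down correct behavior for hostile-looking input, which is exactly the mindset shift the questions aim for.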

And that is the goal: getting developers to ask and address questions they would naturally want to ask about quality, defects, and correctness of function that also happen to address cybersecurity threats. An organization should augment its engineering processes to include this lineup of covert risk analysis questions during any amenable engineering phase that deconstructs and reviews technical components (e.g. architecture, design), effectively turning it into a threat modeling/design analysis session without the formal recognition (and high overhead) of one. The scope of the review should fit the scope and agility of the engineering effort at hand, be it a minor increment, a full feature, an overlaid use case, or the entire system.

Security program and risk managers may wince at the lack of ongoing comprehensive cyber risk analysis and find it difficult to measure effectiveness without metrics expressed in OWASP Top 10 coverage, CVSS scores, and specific vulnerability classes. But the usual path to those metrics (gain minimum security expertise -> apply that expertise to measure security -> use security metrics and methods to address security defects) is predicated on learning cybersecurity industry particulars simply to share a common vocabulary when discussing results. Using developer metrics and methods to address security-relevant defects reaches the same end state while bypassing the extra investment of making developers into part-time security professionals. It is about time to stop making the engineering mountain come to the cybersecurity industry, and instead bring the cybersecurity industry to the engineering mountain.

We continue to explore the topic of leveraging security design analysis in our whitepaper looking at classic threat modeling methodologies applied to modern cloud and serverless application deployments. You can download the whitepaper here: https://resource.cobalt.io/the-challenges-of-threat-modeling-modern-application

Furthermore, you can gauge your security maturity with a free assessment offered by Cobalt. Insights into your security program's maturity can empower better decision-making across your security and engineering teams while nurturing a security mindset.


This blog post was written by Jeff Forristal.

Jeff is an expert cybersecurity technology leader and advisor known for identifying and creating new technical security service offerings, developing industry-first security product features, and driving research into new industry areas. He has served as technical contributor, technical director, and executive/CTO at organizations ranging from small security product startups to very large Fortune 500 finance and technology companies. He has a wide background in software, firmware, and hardware, technology architecture, operations, and applied security strategy.
