Security teams do not struggle only with the number of vulnerabilities. They struggle with deciding which ones matter first. CVSS helps create a common language for severity, but it does not answer every prioritization question on its own.
CVSS Definition
The Common Vulnerability Scoring System is an open standard for describing the characteristics and severity of software vulnerabilities. Its main purpose is to provide a repeatable and standardized way to score vulnerabilities so that different teams can compare findings more consistently. CVSS is a severity framework, not a direct measure of business risk.
That distinction matters in practice. A vulnerability can have a high CVSS score but lower operational urgency if the affected asset is isolated, well-protected, or low value. The opposite can also happen: a medium score may deserve immediate action if the asset is internet-facing, business-critical, or already tied to active exploitation.
Who Maintains CVSS
CVSS is owned and managed by FIRST, the Forum of Incident Response and Security Teams. FIRST maintains the framework, publishes the official specification and calculators, and updates the documentation as the standard evolves.
Who Uses CVSS
CVSS is widely used across vulnerability management workflows. The National Vulnerability Database (NVD) provides CVSS enrichment for published CVE records, and security vendors, product maintainers, and vulnerability management platforms use CVSS as a common severity language in advisories, dashboards, and remediation workflows.
CVSS v3.1 and v4.0 Context
It is important to distinguish between CVSS v3.1 and CVSS v4.0. FIRST states that CVSS is currently at version 4.0, while v3.1 is archived. In CVSS v3.x, the metric groups are Base, Temporal, and Environmental. In CVSS v4.0, the structure changes to Base, Threat, Environmental, and Supplemental.
NVD now officially supports CVSS v4.0, providing v4 calculators and displaying v4 data on CVE pages, but readers should know that NVD and other sources may not always display the same kind of v4 result. In practice, many records still show only partial enrichment, and some pages display contributed scores from CNAs rather than a completed NVD v4 assessment.
Table 1. CVSS v3.1 vs CVSS v4.0
| Area | v3.1 | v4.0 | Why readers should care |
| --- | --- | --- | --- |
| Metric groups | Base, Temporal, Environmental. | Base, Threat, Environmental, Supplemental. | v4.0 separates time-sensitive exploitability into a dedicated Threat group and adds Supplemental context, so modern scoring is more explicit about what kind of context was included. |
| New Base metric | No Attack Requirements metric. | Adds Attack Requirements (AT). | This helps describe whether exploitation depends on deployment or execution conditions, which makes the score easier to interpret in real attack scenarios. |
| User Interaction | None or Required. | None, Passive, or Active. | v4.0 gives analysts a more precise way to describe how much user participation exploitation actually needs. |
| Score naming | Usually discussed as Base, Temporal, and Environmental scores, without the newer short-form naming convention. | Uses explicit labels such as CVSS-B, CVSS-BT, CVSS-BE, and CVSS-BTE. | This makes it clearer whether a displayed score includes only Base metrics or also Threat and Environmental context. |
| Scope modeling | Includes a dedicated Scope (S) metric. | Retires Scope as a standalone metric and instead separates impacts to the vulnerable system and subsequent systems. | Readers should not expect a one-to-one mapping between v3.1 and v4.0 vectors. Some concepts were redesigned rather than simply renamed. |
| Threat context | Uses Temporal metrics such as Exploit Code Maturity, Remediation Level, and Report Confidence. | Uses a dedicated Threat metric group centered on Exploit Maturity (E). | v4.0 simplifies time-sensitive exploitability context and makes it easier to distinguish intrinsic severity from changing threat conditions. |
| Default scoring logic | Base is often the only public score, while Temporal and Environmental are optional refinements. | Base, Threat, and Environmental are always part of the final calculation, even when Threat or Environmental values are left as default “Not Defined”. | This is why v4.0 score labels matter: they show which metric groups were actually populated by the scorer. |
Because many existing tools and remediation workflows still display CVSS v3.x scores, it is still useful to understand v3.1. At the same time, any modern explainer should note that v4.0 is now the current standard.
How CVSS Scoring Works
A CVSS assessment produces both a numeric score and a vector string. The numeric score falls on a 0.0 to 10.0 scale. The vector string records the metric values that produced the result, which makes the assessment transparent and reproducible.
For many analysts, the vector is more useful than the label. A score of 8.8 tells you the issue is severe. A vector tells you why: whether exploitation is remote, whether user interaction is required, and whether prior privileges are needed.
- In CVSS v3.1, the score is built from three metric groups: Base, Temporal, Environmental.
- In CVSS v4.0, the score uses four metric groups: Base, Threat, Environmental, Supplemental.
The most important practical point is this: CVSS gives you technical severity. It helps prioritize work, but it does not replace business context, threat intelligence, or asset criticality.
Base Metrics
In CVSS v3.1, the Base metric group captures the intrinsic characteristics of a vulnerability that remain constant over time and across environments. The Base metrics are:
- Attack Vector (AV)
- Attack Complexity (AC)
- Privileges Required (PR)
- User Interaction (UI)
- Scope (S)
- Confidentiality (C)
- Integrity (I)
- Availability (A)
These metrics reflect both exploitability and impact. The Base score is the foundation of the assessment and is often the only score published by public sources such as NVD.
Temporal Metrics in CVSS v3.1
Temporal metrics in CVSS v3.1 adjust the Base score based on factors that change over time. These are:
- Exploit Code Maturity (E)
- Remediation Level (RL)
- Report Confidence (RC)
They are useful when exploit code becomes public, when a fix is released, or when confidence in the technical details changes. However, NVD does not currently publish Temporal assessments, so organizations need to apply these adjustments themselves if they want a more current v3.1 score.
If your team is moving toward CVSS v4.0, note that v4 replaces the old Temporal group with a Threat group and introduces clearer naming for combinations such as CVSS-B and CVSS-BT.
Environmental Metrics
Environmental metrics in CVSS v3.1 adapt the score to a specific organization or asset. They include security requirement metrics such as Confidentiality Requirement (CR), Integrity Requirement (IR), and Availability Requirement (AR), as well as modified Base metrics such as MAV, MAC, MPR, MUI, MS, MC, MI, and MA.
These metrics help answer a practical question: "How severe is this vulnerability in our environment, on this asset, under our controls and constraints?" That makes the Environmental score much more useful for operational prioritization than the Base score alone. Still, it should be described as environment-specific severity, not as business risk in the full sense.
How a Vector String Builds a Score
A CVSS vector string is the compact representation of the metric values used to derive a score. In CVSS v3.1, a vector might look like this:
CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N
- AV:N - exploitable over the network
- AC:L - no unusual attack complexity
- PR:H - attacker needs high privileges
- UI:N - no user interaction required
- C:L/I:L/A:N - limited confidentiality and integrity impact, no availability impact
This format shows exactly which metric choices were made, which is why the vector string is often more useful than the score alone when technical teams need to review or challenge an assessment.
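The walkthrough above can be reproduced in code. The following Python sketch implements the v3.1 Base score formula for the common Scope: Unchanged case only, with metric weights taken from the v3.1 specification; Scope: Changed uses different constants and Privileges Required weights and is deliberately omitted for brevity.

```python
import math

# CVSS v3.1 Base metric weights (Scope: Unchanged only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # Scope: Unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= input."""
    scaled = round(value * 100000)  # integer math avoids float drift
    if scaled % 10000 == 0:
        return scaled / 100000
    return (math.floor(scaled / 10000) + 1) / 10

def base_score(vector: str) -> float:
    """Compute the v3.1 Base score for a Scope: Unchanged vector."""
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    if metrics["S"] != "U":
        raise NotImplementedError("Scope: Changed uses different formulas")
    iss = 1 - (1 - WEIGHTS["C"][metrics["C"]]) \
            * (1 - WEIGHTS["I"][metrics["I"]]) \
            * (1 - WEIGHTS["A"][metrics["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * WEIGHTS["AV"][metrics["AV"]] \
                          * WEIGHTS["AC"][metrics["AC"]] \
                          * WEIGHTS["PR"][metrics["PR"]] \
                          * WEIGHTS["UI"][metrics["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N"))  # 3.8
```

Running this on the example vector yields 3.8, a Low severity: the high-privileges requirement and limited impact pull the score down even though the flaw is network-reachable.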
Case example: why the label alone is not enough. NVD shows CVE-2021-44228 with a CVSS v3.1 Base score of 10.0 Critical, while related Log4j-family issues such as CVE-2021-4104 and CVE-2021-44832 are scored lower at 7.5 High and 6.6 Medium. This is a useful reminder that even closely related vulnerabilities can have meaningfully different exploit conditions and impact, so teams should review the vector and not just the shared product family or the headline severity.
Base Metrics and Their Impact on the Score
The Base metrics in v3.1 drive the score in different ways:
- Attack Vector describes how remotely the vulnerability can be exploited.
- Attack Complexity reflects how difficult successful exploitation is.
- Privileges Required measures what level of access an attacker needs.
- User Interaction captures whether another user must do something for exploitation to succeed.
- Scope determines whether exploitation can affect resources beyond the original security boundary.
- Confidentiality, Integrity, and Availability measure the potential impact on data and systems.
Together, these metrics describe how easy exploitation is and how damaging it could be if successful.
Real-world reading tip: a network-exploitable issue with no privileges and no user interaction will usually score differently from a flaw that requires local access, prior privileges, or a targeted user action. That is why reading the vector matters more than reading only "High" or "Critical."
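To make that reading habit concrete, a small lookup table can expand a v3.1 Base vector into plain-language metric descriptions. This is a hypothetical helper for illustration, not part of any official tooling:

```python
# Hypothetical helper: expand a CVSS v3.1 Base vector into readable text.
METRIC_NAMES = {
    "AV": "Attack Vector", "AC": "Attack Complexity",
    "PR": "Privileges Required", "UI": "User Interaction",
    "S": "Scope", "C": "Confidentiality", "I": "Integrity",
    "A": "Availability",
}
VALUE_NAMES = {
    "AV": {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"},
    "AC": {"L": "Low", "H": "High"},
    "PR": {"N": "None", "L": "Low", "H": "High"},
    "UI": {"N": "None", "R": "Required"},
    "S":  {"U": "Unchanged", "C": "Changed"},
    "C":  {"N": "None", "L": "Low", "H": "High"},
    "I":  {"N": "None", "L": "Low", "H": "High"},
    "A":  {"N": "None", "L": "Low", "H": "High"},
}

def explain_vector(vector: str) -> list[str]:
    """Return one human-readable line per metric in a v3.1 Base vector."""
    parts = vector.split("/")[1:]  # drop the "CVSS:3.1" prefix
    return [f"{METRIC_NAMES[k]}: {VALUE_NAMES[k][v]}"
            for k, v in (p.split(":") for p in parts)]

for line in explain_vector("CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N"):
    print(line)
```

Printing the expanded metrics alongside the numeric score in tickets or dashboards makes it much harder for reviewers to stop at the label.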
Temporal and Environmental Adjustments
Temporal and Environmental metrics are meant to refine the Base score, not replace it. In v3.1, Temporal metrics help reflect changing exploitability and remediation status over time, while Environmental metrics tailor severity to a particular organization. This is exactly why relying on the Base score alone can produce poor remediation priorities.
When to Use Temporal Metrics
If you still work with CVSS v3.1, Temporal metrics are useful when a vulnerability’s situation changes. For example:
- public exploit code appears;
- active exploitation is observed;
- a patch or workaround becomes available;
- confidence in the vulnerability report changes.
These factors can change the practical urgency of a vulnerability even when the Base score stays the same. NVD provides calculators that support Temporal metrics, but it does not publish Temporal assessments itself.
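Applying the adjustment yourself is straightforward: per the v3.1 specification, the Temporal score is the Base score multiplied by the three Temporal weights and rounded up. A minimal sketch, using the weight values from the v3.1 spec:

```python
import math

# CVSS v3.1 Temporal metric weights (Not Defined "X" = 1.0 for each).
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= input."""
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000
    return (math.floor(scaled / 10000) + 1) / 10

def temporal_score(base: float, e: str = "X", rl: str = "X", rc: str = "X") -> float:
    """Temporal = Roundup(Base x E x RL x RC)."""
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# A 9.8 Base score with proof-of-concept exploit code (E:P), an official
# fix available (RL:O), and a confirmed report (RC:C) adjusts downward:
print(temporal_score(9.8, e="P", rl="O", rc="C"))  # 8.8
```

Note that Temporal metrics can only hold or lower the Base score in v3.1; they never raise it.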
Tailoring the Environmental Score to Your Organization
Environmental scoring becomes important when the same vulnerability affects assets with very different business value or control coverage. A vulnerability on a public-facing payment system may deserve a much higher internal priority than the same vulnerability on a low-value internal test asset. Environmental metrics let teams reflect that difference in a structured way.
A practical example: the same vulnerability on a public-facing payment system and on a low-value internal test system should not create the same remediation priority. FIRST’s implementation guidance makes the same point: scanner-provided Base scores may be correct, but without environment context they can be incomplete or misleading for organizational prioritization.
Why Context Matters in Prioritization
The most common operational mistake is to treat the published Base score as the final answer. CVSS helps you understand severity, but actual prioritization should also include exploitability in the wild, asset criticality, business exposure, and compensating controls. That is why KEV data, threat intelligence, and asset context matter so much in real remediation workflows.
CISA says organizations should use the KEV Catalog as an input to their vulnerability management prioritization framework. That means a vulnerability’s urgency should rise when there is evidence of active exploitation, not only when the Base score looks severe.
Severity Levels and Score Ranges
Table 2. What CVSS tells you and what it does not

| CVSS tells you this | CVSS does not tell you this |
| --- | --- |
| The technical severity of a vulnerability on a standardized scale. | The full business risk to your organization. |
| How the vulnerability can be exploited, such as whether it is network-reachable, requires privileges, or needs user interaction. | Whether the affected asset is business-critical, customer-facing, or low value in your environment. |
| The expected impact pattern on confidentiality, integrity, and availability. | Who owns the asset, who should fix the issue, or what remediation SLA makes sense internally. |
| A transparent vector string that shows how the score was derived. | Whether the vulnerability is already being actively exploited against organizations like yours. |
| A consistent language for comparing vulnerability severity across advisories, scanners, and workflows. | Whether compensating controls, segmentation, monitoring, or other local conditions already reduce practical urgency. |
| A useful input for triage and prioritization. | The final patch order by itself, without threat intelligence, exposure, and asset context. |
NVD publishes qualitative severity ranges for CVSS. For CVSS v3.x and v4.0, the ranges are:
- None: 0.0
- Low: 0.1-3.9
- Medium: 4.0-6.9
- High: 7.0-8.9
- Critical: 9.0-10.0
These ranges help teams communicate urgency more consistently, but the underlying vector still matters more than the label alone.
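The ranges are simple enough to encode directly. A small helper like this one (hypothetical, mirroring NVD's published ranges) keeps labels consistent across reports:

```python
def severity_label(score: float) -> str:
    """Map a CVSS v3.x/v4.0 numeric score to NVD's qualitative range."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores fall on a 0.0 to 10.0 scale")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(severity_label(9.8))  # Critical
print(severity_label(6.6))  # Medium
```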
Rounding and Representation Notes
CVSS uses a specific Roundup function rather than ordinary rounding. In the v3.1 specification, Roundup means the smallest number to one decimal place that is equal to or higher than the raw value. For example, 4.02 becomes 4.1. This is one reason different tools can sometimes appear inconsistent if teams do not understand the exact calculation method or the CVSS version being used.
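The difference from ordinary rounding is easy to demonstrate. The sketch below follows the integer-based approach given in Appendix A of the v3.1 specification, which also sidesteps binary floating-point drift:

```python
import math

def cvss_roundup(value: float) -> float:
    """Smallest number with one decimal place >= value (v3.1 Appendix A)."""
    scaled = round(value * 100000)      # work in integers to avoid float drift
    if scaled % 10000 == 0:
        return scaled / 100000          # already exact to one decimal place
    return (math.floor(scaled / 10000) + 1) / 10

print(round(4.02, 1))       # 4.0 -- ordinary rounding
print(cvss_roundup(4.02))   # 4.1 -- CVSS Roundup
print(cvss_roundup(4.0))    # 4.0 -- exact values pass through unchanged
```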
Where CVSS Is Used in Practice
In day-to-day security operations, CVSS is used for triage, remediation planning, SLA setting, and reporting. It gives analysts, engineers, and stakeholders a shared technical language for discussing vulnerability severity. NVD also notes that CVSS is well suited to prioritization workflows and severity comparisons across industries and organizations.
One practical workflow looks like this:
- Read the vector, not just the label.
- Check whether the asset is exposed and business-critical.
- Look for KEV or other exploitation evidence.
- Add environment context before assigning the final SLA.
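One way to make that workflow concrete is to encode it as a rule-of-thumb priority function. Everything below is a hypothetical illustration, not a standard: the field names, thresholds, and priority tiers are assumptions that a real program would tune to its own SLAs and data sources.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical fields for illustration; real scanner feeds vary.
    cve_id: str
    base_score: float        # published CVSS Base score
    in_kev: bool             # listed in CISA's KEV Catalog?
    internet_facing: bool    # asset exposure
    business_critical: bool  # asset criticality

def triage_priority(f: Finding) -> str:
    """Sketch of the workflow above: severity starts, context sets the SLA."""
    if f.in_kev and (f.internet_facing or f.business_critical):
        return "P1 - remediate immediately"
    if f.base_score >= 9.0 and f.internet_facing:
        return "P1 - remediate immediately"
    if f.base_score >= 7.0 or f.in_kev:
        return "P2 - expedited SLA"
    if f.base_score >= 4.0 and f.business_critical:
        return "P2 - expedited SLA"
    return "P3 - standard SLA"

# A medium-scored KEV entry outranks a high-scored internal-only issue:
kev_medium = Finding("CVE-0000-0001", 6.6, True, True, False)
internal_high = Finding("CVE-0000-0002", 7.5, False, False, False)
print(triage_priority(kev_medium))    # P1 - remediate immediately
print(triage_priority(internal_high)) # P2 - expedited SLA
```

The point of the sketch is the ordering of the checks: exploitation evidence and exposure are consulted before the raw score decides anything.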
Combining CVSS with Threat Intelligence and Asset Criticality
A high CVSS score does not always mean the issue is the most urgent item to patch first. Another useful complement is EPSS. FIRST describes EPSS as a probability score that estimates the likelihood of exploitation activity in the next 30 days. It should not replace CVSS, but it can help teams compare technical severity with likely exploitation pressure.
Likewise, a medium score should not automatically be ignored. If a vulnerability appears on a known exploited list or is affecting a business-critical external system, it may deserve immediate attention even if the Base score alone does not look catastrophic. That is why mature teams combine CVSS with threat intelligence and asset criticality instead of using CVSS in isolation.
TopScan uses CVSS as a common severity language inside its vulnerability management workflow for internet-facing assets such as websites, APIs, and exposed services. In practice, CVSS helps standardize the initial severity of CVE-linked findings, but it is not treated as the final priority on its own. TopScan adds operational context to help teams triage issues faster, assign realistic SLAs, and track remediation through a single workflow.
Common Pitfalls and Practical Tips for Using CVSS
The most common mistakes include treating CVSS as a direct measure of business risk, relying only on the Base score, ignoring version differences between v3.x and v4.0, and forgetting that NVD typically publishes Base metrics only.
Another common mistake is to compare scores across versions as if they were interchangeable. CVSS v3.1 and v4.0 use different metrics and formulas, so teams should always confirm which version a tool, scanner, or advisory is showing before making remediation decisions.
A more practical approach looks like this:
- Use the full vector string, not just the label.
- Check which CVSS version your tool is using.
- Add threat intelligence and asset context before setting remediation priority.
- Use Environmental metrics when your organization needs a contextualized severity score.
- Keep scores and priorities updated as exploitability and remediation status change.



