The Government’s PII Paradox
Why Secure SaaS Can’t Comply with Undefined Rules
What happens when you’re required to implement two-factor authentication—but forbidden from collecting the email or phone number needed to do so?
You get a paradox. One that punishes secure design, stalls innovation, and penalizes vendors for following best practices.
This isn’t hypothetical. It’s pulled directly from our experience delivering commercial SaaS platforms to federal clients. In a recent program review, we were told two things in the same breath:
Two-factor authentication was mandatory.
Collection of PII, including emails and phone numbers, was prohibited.
This contradiction wasn’t malicious. It wasn’t even deliberate. It was the inevitable result of a broken system—one that lacks a shared definition of Personally Identifiable Information (PII), treats all data as equally toxic, and shifts interpretation depending on who you talk to.
Undefined Compliance, Undefined Risk
Federal privacy rules lean heavily on NIST SP 800-122, which defines PII in binary terms: either it is, or it isn’t. But this approach ignores nuance. There’s no operational framework for distinguishing between an email address and a Social Security Number. Both can drag your system into FedRAMP Moderate or High territory.
In practice, the interpretation of PII varies wildly:
ISSMs treat login data as a breach risk.
Contracting Officers focus on liability language.
Privacy Officers apply state-level laws as a precaution.
And because contracts default to vague phrasing like “must comply with all applicable privacy regulations,” ambiguity becomes the default. Vendors are left guessing. The result is over-engineered systems, stalled timelines, and a chilling effect on innovation—especially for smaller vendors.
The Login Layer Trap
Here’s the kicker: your system doesn’t need to process any sensitive business data to fall into this trap. If it has users, and they need to log in, you’re already inside the blast radius.
Email addresses and phone numbers—necessary for verifying identity—are often treated as sensitive PII. Even when hashed, encrypted, or used only transiently, they can trigger compliance reviews, breach response plans, and inflated audit scope.
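To make the "hashed or used only transiently" point concrete, here is a minimal sketch (the function name and the pepper value are illustrative, not from any specific program) of how a login flow can use an email address once to deliver a 2FA code and then persist only a keyed hash. The stored value still works as a stable lookup key, but it cannot be reversed without the server-side secret.

```python
import hashlib
import hmac

# Hypothetical server-side secret ("pepper"). In practice this would live
# in a secrets manager, never in source control.
PEPPER = b"example-pepper-not-for-production"

def pseudonymize_email(email: str) -> str:
    """Derive a stable pseudonymous token from an email address.

    The plaintext email is used transiently (e.g., to send a 2FA code)
    and then discarded; only this keyed hash is persisted, so the stored
    value cannot be reversed without the secret key.
    """
    normalized = email.strip().lower().encode("utf-8")
    return hmac.new(PEPPER, normalized, hashlib.sha256).hexdigest()

# Normalization makes the token stable across cosmetic input differences,
# so it can still serve as an account lookup key.
assert pseudonymize_email(" User@Example.gov ") == pseudonymize_email("user@example.gov")
```

Under today’s binary definition, even this derived token can be swept into the same compliance bucket as the plaintext email it protects.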
That’s not security. That’s superstition.
A Case for Levels
We don’t treat all systems the same. We classify them by impact (Low, Moderate, High). We tier security controls by risk. Why don’t we do the same for data?
We propose a levels-based taxonomy for PII:
Level 0 – Operational Metadata (anonymous logs, telemetry)
Level 1 – Pseudonymous Identifiers (tokens, hashed emails)
Level 2 – Contact-Linked PII (emails, phone numbers for 2FA)
Level 3 – Identity-Confirming PII (names, addresses)
Level 4 – Legally Regulated Identity Data (SSNs, biometrics, health/financial records)
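The taxonomy above is simple enough to express directly in code. Here is a sketch (field names and the catalog mapping are illustrative assumptions, not a prescribed schema) showing how a data catalog could tag fields by level and scope a system by the highest level it stores:

```python
from enum import IntEnum

class PIILevel(IntEnum):
    """The proposed levels-based taxonomy, as an ordered enum."""
    OPERATIONAL_METADATA = 0     # anonymous logs, telemetry
    PSEUDONYMOUS_IDENTIFIER = 1  # tokens, hashed emails
    CONTACT_LINKED = 2           # emails, phone numbers for 2FA
    IDENTITY_CONFIRMING = 3      # names, addresses
    LEGALLY_REGULATED = 4        # SSNs, biometrics, health/financial records

# Illustrative field-to-level mapping; a real system would maintain this
# as part of its data catalog.
FIELD_LEVELS = {
    "request_latency_ms": PIILevel.OPERATIONAL_METADATA,
    "session_token": PIILevel.PSEUDONYMOUS_IDENTIFIER,
    "email": PIILevel.CONTACT_LINKED,
    "mailing_address": PIILevel.IDENTITY_CONFIRMING,
    "ssn": PIILevel.LEGALLY_REGULATED,
}

def system_level(fields) -> PIILevel:
    """Compliance scope is driven by the highest level a system stores."""
    return max((FIELD_LEVELS[f] for f in fields),
               default=PIILevel.OPERATIONAL_METADATA)

# A login system that stores only hashed identifiers tops out at Level 1 —
# a distinction the current binary definition of PII cannot express.
assert system_level(["request_latency_ms", "session_token"]) == PIILevel.PSEUDONYMOUS_IDENTIFIER
```

Because the levels are ordered, scoping becomes a one-line comparison (e.g., "controls X apply at Level 2 and above") instead of a case-by-case negotiation.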
This framework brings clarity. It lets product teams isolate risk. It gives program managers and ISSMs a common language. And it creates rational scoping for FedRAMP and other compliance regimes.
Why It Matters
A tiered model isn’t theoretical—it’s operationally necessary. It allows:
Procurement contracts to clarify PII expectations with precision.
Product teams to build modular architectures based on risk.
FedRAMP scoping to reflect reality, not fear.
Compliance programs to get smarter, not just generate more paperwork.
What Comes Next
We’re not calling for another white paper. We’re calling for a working group—with participation from NIST, GSA, FedRAMP PMO, DISA, ISSMs, acquisition leaders, and vendors—to define an enforceable model.
We need a shared definition of PII that maps to risk, clarifies obligations, and aligns with modern software practices.
Without it, secure government tools will remain hard to build, harder to deliver, and prohibitively expensive for those trying to help.