ketchalegend

Why I Redesigned My Auth Using 'Parse, Don't Validate'

Validation scatters checks across modules and lets bad data slip in. Parsing at the boundary into typed values eliminates an entire class of auth bugs — here's how I rebuilt fittrack's auth flow around that idea.

This week, while working on the auth page redesign and security analysis for fittrack, I found myself revisiting one of my favorite software design principles: Parse, Don't Validate.

The idea is simple: rather than taking raw input and checking it later (validation), parse it into a well-structured representation as early as possible — and use the type system to guarantee correctness from then on. It's a concept that's been floating around for years, but seeing it pop up on Hacker News today in Derek Rodriguez's C++ retrospective reminded me how much it applies to everyday development.

The Problem with Validating Auth Input

In our fitness tracker app, we support multiple authentication methods: email/password, OAuth (Google, GitHub), and magic links. Each has its own quirks — email normalization, token expiry checks, redirect URI validation. Originally, I had a typical flow: grab the raw request, validate fields with if-statements, then pass the sanitized data to business logic. Sound familiar?

But validation is fragile. You might check for a valid email format, but later another module might assume the email is already normalized and lowercase. Or you might validate a token's signature, but then a middleware logs the token in plaintext because it was never parsed into a safe type. The deeper your validation is buried, the easier it is to skip or misuse.
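To make the fragility concrete, here is a minimal sketch of the old pattern (function names are illustrative, not fittrack's actual code): the email is "validated" in one place but remains a plain string everywhere else, so nothing stops a later module from assuming more than was checked.

```rust
// Sketch of the fragile validate-then-pass-strings-around pattern.
fn handle_signin(raw_email: &str) -> Result<(), &'static str> {
    if !raw_email.contains('@') {
        return Err("invalid email");
    }
    // raw_email is still just a &str: a later module is free to assume
    // it was also trimmed and lowercased, which it was not.
    store_user(raw_email)
}

fn store_user(email: &str) -> Result<(), &'static str> {
    // Silently stores "  Alice@Example.COM " verbatim.
    println!("storing {email:?}");
    Ok(())
}

fn main() {
    assert!(handle_signin("  Alice@Example.COM ").is_ok());
}
```

The check passes, yet a mixed-case, untrimmed email reaches storage — exactly the class of bug the redesign targets.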

Parsing at the Boundary

In the redesign, I now parse all auth input at the HTTP boundary. For example, the sign-in endpoint accepts a JSON body, but before any controller runs, a middleware parses it into a strict Rust enum (using serde and custom deserialization) that represents the auth method and its validated parameters.

use serde::Deserialize;

// Externally tagged: a body like {"Password": {"email": ..., "password": ...}}
// selects the variant, and each field's type enforces its own invariants
// during deserialization.
#[derive(Deserialize)]
pub enum AuthRequest {
    Password {
        #[serde(deserialize_with = "normalize_email")]
        email: Email,
        password: Password,
    },
    OAuth {
        provider: OAuthProvider,
        code: AuthorizationCode,
        redirect_uri: RedirectUri,
    },
    MagicLink {
        token: MagicLinkToken,
    },
}

Notice that Email, Password, AuthorizationCode, etc., are newtypes with their own invariants (e.g., Email always lowercase and trimmed). Once the request is parsed, the rest of the system never sees raw strings — it only works with these domain types. This eliminated an entire class of bugs: misformatted emails stored in the DB, tokens being logged, or redirect URIs being left unvalidated.
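As a sketch of what one of these newtypes might look like (the serde wiring is omitted; the error type and bounds here are illustrative), Email can only be constructed through a fallible conversion that normalizes as it parses:

```rust
use std::convert::TryFrom;

/// An email address guaranteed trimmed, lowercased, and containing an '@'
/// with non-empty local and domain parts.
#[derive(Debug, Clone, PartialEq)]
pub struct Email(String);

impl TryFrom<&str> for Email {
    type Error = &'static str;

    fn try_from(raw: &str) -> Result<Self, Self::Error> {
        // Normalize first, then check: the stored value is always canonical.
        let normalized = raw.trim().to_lowercase();
        let mut parts = normalized.splitn(2, '@');
        match (parts.next(), parts.next()) {
            (Some(local), Some(domain)) if !local.is_empty() && !domain.is_empty() => {
                Ok(Email(normalized))
            }
            _ => Err("invalid email"),
        }
    }
}

impl Email {
    /// Read-only access; there is no way to mutate the inner string,
    /// so the invariant holds for the value's entire lifetime.
    pub fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    let e = Email::try_from("  Alice@Example.COM ").unwrap();
    assert_eq!(e.as_str(), "alice@example.com");
    assert!(Email::try_from("not-an-email").is_err());
}
```

Because the field is private and the only constructor normalizes, downstream code cannot hold an Email that violates the invariant.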

Security Analysis Became Easier

When I ran the security analysis on the new auth flow, the principle made my life much simpler. Because parsing is centralized, I could audit exactly one location for injection attacks, buffer overflows (unlikely in Rust, but still), or logic errors. The type system enforced that once a value was parsed, it could be trusted. No more wondering if a validation was missed somewhere in a chain of callbacks.

For instance, our OAuth callback endpoint used to validate the state parameter in three separate places (middleware, controller, service). Now it's parsed once into a StateToken type that includes a cryptographic check during deserialization. If parsing fails, the request is rejected before any business logic runs. The code is shorter, clearer, and auditable.
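A rough sketch of that parse-once shape follows. The checksum here is a toy FNV-1a stand-in so the example stays dependency-free; the real StateToken verifies a proper HMAC (e.g. via the hmac/sha2 crates), and the wire format and key are illustrative:

```rust
use std::convert::TryFrom;

/// An OAuth `state` value that has already passed an integrity check.
/// If a StateToken exists, the check succeeded — by construction.
pub struct StateToken {
    pub nonce: String,
}

const KEY: &[u8] = b"demo-key"; // illustrative only, never hardcode real keys

/// FNV-1a over key || data. NOT cryptographically secure; a placeholder
/// for a real HMAC so the sketch compiles with std alone.
fn toy_mac(data: &str) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in KEY.iter().chain(data.as_bytes()) {
        h ^= b as u64;
        h = h.wrapping_mul(0x1_0000_0000_01b3);
    }
    h
}

impl TryFrom<&str> for StateToken {
    type Error = &'static str;

    fn try_from(raw: &str) -> Result<Self, Self::Error> {
        // Assumed wire format: "<nonce>.<mac-in-hex>".
        let (nonce, mac) = raw.rsplit_once('.').ok_or("malformed state")?;
        let claimed = u64::from_str_radix(mac, 16).map_err(|_| "malformed mac")?;
        if claimed != toy_mac(nonce) {
            return Err("state integrity check failed");
        }
        Ok(StateToken { nonce: nonce.to_string() })
    }
}

/// Mint a token the way the authorize redirect would.
fn issue(nonce: &str) -> String {
    format!("{}.{:x}", nonce, toy_mac(nonce))
}

fn main() {
    let wire = issue("abc123");
    assert!(StateToken::try_from(wire.as_str()).is_ok());
    assert!(StateToken::try_from("abc123.deadbeef").is_err());
}
```

Controllers and services take a StateToken, not a String, so there is no code path where an unchecked state value reaches business logic.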

Real-Time Dashboard Benefits Too

This principle isn't limited to auth. In the realtime dashboard branch, we ingest workout data from devices. Those devices send raw JSON with timestamps, heart rate arrays, and GPS coordinates. By parsing them into strongly typed WorkoutSession and HeartRateSample structs at the WebSocket boundary, we ensure that all downstream processing (anomaly detection, leaderboard updates) works with clean data. If a device sends a bad reading, the parse fails and the bad frame is dropped immediately, rather than corrupting a running average.
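The same frame-level rejection can be sketched like this (the 25–250 bpm bounds and the tuple input are illustrative, standing in for the deserialized device payload):

```rust
use std::convert::TryFrom;

/// A heart-rate reading guaranteed to be physiologically plausible.
#[derive(Debug)]
pub struct HeartRateSample {
    pub timestamp_ms: u64,
    pub bpm: u16,
}

impl TryFrom<(u64, u16)> for HeartRateSample {
    type Error = &'static str;

    fn try_from((timestamp_ms, bpm): (u64, u16)) -> Result<Self, Self::Error> {
        // Illustrative bounds; reject sensor glitches at the boundary.
        if !(25..=250).contains(&bpm) {
            return Err("implausible heart rate");
        }
        Ok(HeartRateSample { timestamp_ms, bpm })
    }
}

/// Parse a raw frame at the WebSocket boundary. One bad reading rejects
/// the whole frame, so downstream averages never see corrupt data.
fn parse_frame(raw: &[(u64, u16)]) -> Result<Vec<HeartRateSample>, &'static str> {
    raw.iter().map(|&r| HeartRateSample::try_from(r)).collect()
}

fn main() {
    assert_eq!(parse_frame(&[(0, 72), (1000, 80)]).unwrap().len(), 2);
    assert!(parse_frame(&[(0, 72), (1000, 999)]).is_err());
}
```

Collecting an iterator of Results into a `Result<Vec<_>, _>` gives the drop-the-bad-frame behavior for free, which keeps the anomaly-detection and leaderboard code entirely free of range checks.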

The Takeaway

"Parse, don't validate" isn't a silver bullet, but it's a force multiplier for code quality and security. By pushing validation to the edges and using types to represent parsed data, you reduce cognitive overhead, eliminate entire classes of errors, and make your codebase easier to reason about.

If you haven't read the original essay by Alexis King (it's in Haskell, but the lesson is universal), check it out. And for a deep dive into C++ applications, Derek's article is excellent.

This week's redesign taught me that the principle scales from a toy example to a production auth system. Next time you find yourself writing if (email != null && email.contains("@")) — stop. Parse it. Your future self (and your users) will thank you.

P.S. The fittrack branches are public if you want to peek at the code: auth redesign and security analysis.