
Compliance Shortcuts That Ruin Healthcare Apps (And How to Avoid Them)

Most compliance failures in healthcare software are not the result of malicious decisions. They are the result of engineers building the fastest working thing without understanding what being HIPAA-aware actually requires, or of founders treating compliance as a checkbox rather than a set of concrete engineering requirements.

The shortcuts are almost always the same. This article names them specifically so you can avoid them.

What "HIPAA-Aware" Actually Means in Practice

HIPAA compliance is commonly described in terms of the HIPAA Security Rule and Privacy Rule. These are real requirements, but they are written in legal language that does not translate directly to code.

"HIPAA-aware" in practice means:

  • Protected Health Information (PHI) is encrypted at rest and in transit
  • PHI is accessible only to authorized users, enforced at the API layer
  • Every access to PHI is logged with sufficient detail to answer "who accessed what, when, and why"
  • Business Associate Agreements (BAAs) are in place with every vendor that touches PHI
  • There is a formal incident response plan for breaches
  • Access is revoked promptly when staff leave or change roles

What it does not mean: using a specific cloud provider, having a specific certification, or writing a compliance policy document. Compliance without the technical controls is theater.

Shortcut 1: Storing PHI in Application Logs

This is the most common mistake and the one that is hardest to find after the fact.

Application logs exist to help engineers debug problems. Engineers debug by logging everything interesting — request parameters, response bodies, user identifiers, error messages. In a healthcare application, those request parameters and error messages frequently contain PHI: patient names, dates of birth, diagnosis codes, medication names.

Once PHI is in your logs, it is in your log management system (Datadog, Papertrail, CloudWatch, wherever), which may not have a BAA in place and almost certainly has less access control than your application database.

What this looks like:

// This logs PHI without the developer realizing it
logger.info('Processing appointment', { request: req.body })

// req.body contains patient_name, dob, diagnosis_code...

Error logging is especially risky. When a request fails, the error handler often dumps the full request context — which includes the patient data that caused the error.

How to avoid it:

  1. Establish a rule before code is written: logs contain user IDs and record IDs, never names, dates of birth, diagnosis codes, or other PHI
  2. Log PHI-related operations at the audit log level (separate from application logs, with appropriate access controls), not the debug log level
  3. Review your logging configuration with this lens before deployment
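The rule in step 1 can be enforced mechanically with an allow-list helper rather than left to developer discipline. A minimal sketch — the field names are illustrative and should come from your own PHI inventory:

```javascript
// Allow-list helper for log payloads: only explicitly safe identifiers
// survive. Anything not on the list (names, DOBs, diagnosis codes) is
// dropped, so new fields are safe by default.
const LOGGABLE_FIELDS = ['user_id', 'patient_id', 'appointment_id', 'record_id'];

function loggable(payload) {
  const safe = {};
  for (const field of LOGGABLE_FIELDS) {
    if (field in payload) safe[field] = payload[field];
  }
  return safe;
}

// Instead of logging req.body wholesale:
// logger.info('Processing appointment', loggable(req.body))
```

An allow-list is deliberately stricter than a deny-list: a deny-list fails open when someone adds a new PHI field it does not know about.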

How to spec it: The technical specification for the project should include a logging policy with explicit examples of what is and is not allowed in logs.

Shortcut 2: No Audit Trail

HIPAA requires an audit trail. Specifically, you must be able to demonstrate who accessed or modified PHI, and when.

What "no audit trail" looks like:

  • A doctor can view any patient record in the system, and there is no record that they did
  • An admin can update a patient's information with no record of what changed
  • A user can export data and there is no log of the export

An audit trail is not the same as application logs. It is a purpose-built record of access and modification events with:

  • The user who performed the action (user ID, role)
  • The record accessed or modified (type and ID)
  • The action (read, update, delete, export)
  • A timestamp
  • Enough context to understand what changed (for mutations, before and after values)

How to implement it:

Build an audit_events table from the start. Write to it for every PHI access and modification. This is not a suggestion to log every database query — it is a targeted log of meaningful events.

For read access, the trigger is typically viewing a record, not the database SELECT itself. Implement audit logging in the service layer, where business actions happen.

audit_events:
  - id
  - user_id
  - action (VIEW_PATIENT, UPDATE_RECORD, EXPORT_DATA, etc.)
  - resource_type
  - resource_id
  - metadata (JSON — additional context)
  - created_at

Retrofitting an audit trail into an existing application requires instrumenting every code path that touches PHI. It is significantly cheaper to build it at the start.
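A minimal sketch of the service-layer pattern described above. The database accessors (`getPatient`, `insertAuditEvent`) are hypothetical stand-ins for your data layer; the event shape mirrors the audit_events schema:

```javascript
// Builds a row for the audit_events table: the business action
// (VIEW_PATIENT), not the underlying SELECT.
function buildAuditEvent({ userId, action, resourceType, resourceId, metadata = {} }) {
  return {
    user_id: userId,
    action,
    resource_type: resourceType,
    resource_id: resourceId,
    metadata: JSON.stringify(metadata),
    created_at: new Date().toISOString(),
  };
}

// Hypothetical service method: the audit write happens alongside the read,
// in the service layer where the business action is known.
async function viewPatient(db, userId, patientId) {
  const patient = await db.getPatient(patientId);
  await db.insertAuditEvent(buildAuditEvent({
    userId,
    action: 'VIEW_PATIENT',
    resourceType: 'patient',
    resourceId: patientId,
  }));
  return patient;
}
```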

Shortcut 3: Using Non-BAA Cloud Services for PHI

A Business Associate Agreement (BAA) is a contract in which a vendor agrees to handle PHI in accordance with HIPAA. Without a BAA, using a vendor to process or store PHI is a HIPAA violation, regardless of how secure their infrastructure is.

Common places where BAAs are missed:

  • Log management (Datadog, Papertrail, Logtail) — if PHI reaches your logs and logs go to one of these services, you need a BAA with them
  • Email providers (SendGrid, Mailgun, Postmark) — if you send any PHI in email (even appointment reminders with patient names), you need a BAA
  • Error tracking (Sentry) — if PHI appears in error context, Sentry receives PHI and needs a BAA
  • Analytics (Mixpanel, Amplitude, PostHog) — if user events include PHI fields, you need BAAs

Many of these vendors offer BAAs on enterprise or higher-tier plans. Some do not offer them at all — meaning you cannot use them for applications that process PHI, regardless of how useful they are.

How to spec it: Before selecting any third-party service, explicitly decide whether PHI could reach that service and verify BAA availability. This decision must be made before integration, not after.

Shortcut 4: RBAC Not Enforced at the API Layer

Role-based access control (RBAC) in healthcare applications is frequently implemented in the UI and skipped in the API. The result: a user whose UI does not show them a button can still make the API call that button would have made, and the API will honor it.

This is not a theoretical attack. It requires only that a user with a lower-privilege role use a browser's developer tools to observe and replay API requests. No hacking required.

What this looks like:

A nurse role should not be able to view billing information. The billing tab is hidden from nurses in the UI. But the /api/patients/:id/billing endpoint does not check whether the requesting user has the billing viewer role — it only checks that the user is authenticated.

How to fix it:

Every API route that accesses PHI must enforce the specific permission required, not just authentication. This is not one check at the top of a controller — it is a permission check for every sensitive operation.

In practice:

  • Define permissions clearly in a constants file (not scattered as strings throughout the code)
  • Write a middleware or decorator that enforces a named permission
  • Apply that middleware to every relevant route
  • Test with requests from accounts that do not have the permission
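The list above can be sketched as an Express-style middleware factory. This assumes an upstream authentication step has attached `req.user` with a permissions list; the permission names are placeholders:

```javascript
// Named permissions defined once, not scattered as string literals.
const PERMISSIONS = Object.freeze({
  VIEW_BILLING: 'billing:view',
  UPDATE_RECORD: 'record:update',
});

// Middleware factory: enforces one named permission per route,
// on top of (not instead of) authentication.
function requirePermission(permission) {
  return (req, res, next) => {
    const perms = (req.user && req.user.permissions) || [];
    if (!perms.includes(permission)) {
      return res.status(403).json({ error: 'forbidden' });
    }
    next();
  };
}

// Applied per route:
// app.get('/api/patients/:id/billing',
//   authenticate, requirePermission(PERMISSIONS.VIEW_BILLING), handler);
```

With this shape, the nurse-role account from the earlier example gets a 403 from the billing endpoint even though they are authenticated — exactly the check the UI-only approach skips.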

Shortcut 5: Insecure File Storage

Healthcare applications frequently handle file uploads: lab results, imaging documents, insurance cards, signed consent forms. These files contain PHI and require the same protections as database records.

Common mistakes:

  • Files stored in a publicly accessible S3 bucket (or equivalent)
  • Files stored with sequential or guessable keys (uploads/patient_1042_labresult.pdf)
  • File URLs embedded in email or returned in API responses without expiry
  • No virus scanning on uploaded files

The correct approach:

  • Store files in a private bucket with no public access
  • Generate file keys that are random UUIDs, not predictable identifiers
  • Serve files via pre-signed URLs with short expiry windows (15–60 minutes depending on use case)
  • Implement file download logging in the audit trail
  • Run virus scanning on upload

Pre-signed URLs are the mechanism that makes private storage usable. A pre-signed URL grants temporary access to a specific file to anyone who has the URL, without requiring the downloader to have cloud credentials. They expire, so a leaked URL eventually becomes useless.

Shortcut 6: Missing Session Timeout

HIPAA requires that sessions be terminated after a period of inactivity. The specific timeout is not mandated — it should be defined in your organization's security policy — but "never" is not an acceptable answer.

In practice:

  • Browser sessions should time out after 15–30 minutes of inactivity for clinical applications
  • Mobile sessions may have longer timeouts but should require re-authentication for sensitive operations
  • The timeout should be enforced server-side, not just by clearing a cookie client-side

Enforcing timeout server-side means the access token (or session) must have an expiry, and the server must reject expired tokens even if the client still holds them.
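A minimal sketch of that server-side check. The 20-minute value and the session shape are assumptions — take the real number from your security policy:

```javascript
const INACTIVITY_TIMEOUT_MS = 20 * 60 * 1000; // example policy value

// The server decides, regardless of what cookies the client still holds
function isSessionActive(session, nowMs = Date.now()) {
  if (!session || typeof session.lastActivityAt !== 'number') return false;
  return nowMs - session.lastActivityAt <= INACTIVITY_TIMEOUT_MS;
}

// Slide the window on each authenticated request that passes the check
function touchSession(session, nowMs = Date.now()) {
  return { ...session, lastActivityAt: nowMs };
}
```

A request arriving after the window closes is rejected even though the browser still presents the session token.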

How to Spec Compliance Requirements Before Build Starts

Compliance requirements are not separate from technical requirements — they constrain every part of the system. The time to define them is during technical discovery, before any code is written.

A pre-build compliance specification for a healthcare application should define:

  1. PHI inventory — what data fields are PHI, where they are stored, and which systems process them
  2. BAA requirements — list of all third-party services with PHI access and BAA status
  3. RBAC matrix — a table of roles × permissions for every sensitive operation
  4. Audit log requirements — list of events that must be audited, with required fields for each
  5. Encryption requirements — at-rest encryption method, in-transit requirements, key management
  6. Session policy — timeout duration, re-authentication requirements for sensitive operations
  7. File handling policy — allowed file types, storage configuration, URL generation, retention

This document does not need to be long. It needs to be explicit. "We will encrypt PHI at rest" is not a specification. "PHI is stored in a PostgreSQL database with encryption at rest enabled on the RDS instance, and file attachments are stored in a private S3 bucket with SSE-S3 encryption" is a specification.

The specification also needs to define how compliance will be verified — ideally a checklist that gets reviewed before deployment, not a promise that it was handled.


The Yellow Labs builds healthcare applications with these requirements built into the development process from the start. Compliance is not a phase that happens after the build — it is a set of constraints that shapes every technical decision.

If you are planning a healthcare application, our healthcare platform development services include a technical discovery phase that produces exactly this kind of specification.

Talk to us about your project

Senior engineers, honest scoping, and hourly billing. No fixed-price guesses on work we haven't understood yet.
