Implementing the Security Rule’s technical safeguards in a web application
Writing a web application that is HIPAA-compliant does not have to be hard. If you already know how to write secure applications, you are probably halfway there. This article outlines what it takes to get you all the way to 100%.
Who Needs to Be Compliant?
Now, you might think that your application does not need to be compliant because you don’t work directly in the medical field. But if your application is used by companies that are required to be compliant, you may need to be as well. This is because those companies may be required to choose vendors that are also HIPAA-compliant. Even when they are not required to do so, they often prefer to work with HIPAA-compliant vendors because it reduces the chance of a failed audit later.
The HIPAA Security Rule
There are many rules that must be followed for a company and software application to become fully HIPAA-compliant. These involve administrative changes, physical safeguards, new policies, documentation, and technical requirements. Since this article is focused on development, I will concentrate on the Technical section of the Security Rule.
The Technical Safeguard Standards of the Security Rule were put into place to protect electronic protected health information (ePHI) and require your application to have the following:
- Access Control
- Audit Controls
- Integrity
- Person or Entity Authentication
- Transmission Security
But what do these mean and how do you implement them in a web application? Read through the following steps to find out.
Step 1: Authenticate Users
This step addresses the “Person or Entity Authentication” standard. This standard dictates that a user must prove they are who they claim to be, and that only authenticated users should be allowed into the application.
You probably already have a login screen in your application, so that gets you part of the way there. Just make sure no screens can be hit directly with a URL without the user having first logged in. To be 100% compliant, you also want to make sure the user accounts are protected by strong passwords, the accounts and passwords expire, and all attempted access (successful or failed) to the application is logged.
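The “no screen without a login” rule is easiest to enforce in one central place rather than screen by screen. Here is a minimal, framework-agnostic sketch in Python; the `request.session` dict and the `(status, body)` return tuple are illustrative stand-ins for whatever your framework actually provides:

```python
from functools import wraps

def login_required(handler):
    """Allow a handler to run only when the session has an authenticated user.

    The request/session shape here is a stand-in, not a real framework's API.
    """
    @wraps(handler)
    def wrapper(request, *args, **kwargs):
        if not request.session.get("user_id"):
            return (302, "/login")   # bounce unauthenticated users to the login screen
        return handler(request, *args, **kwargs)
    return wrapper

class Request:
    """Minimal stand-in for a framework request object."""
    def __init__(self, session):
        self.session = session

@login_required
def patient_screen(request):
    return (200, "patient data")
```

Because the check is a decorator, a developer adding a new screen cannot forget to re-implement it; they only have to remember to apply it.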
Hashing the passwords using a strong one-way function is also a must; never store them with reversible encryption, and certainly not in plain text. Be sure identical passwords hash to different values by using a unique random salt for each user. Otherwise, the passwords can be cracked easily using a rainbow table attack. I recommend at least PBKDF2 (with SHA-256 rather than SHA-1 these days), but security requirements are constantly changing, so always research the best algorithms to use each time you are starting a new application.
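Salted one-way hashing can be done entirely with Python’s standard library. A minimal sketch; the iteration count shown is illustrative, so tune it for your own hardware:

```python
import hashlib
import hmac
import os

ROUNDS = 310_000  # illustrative; benchmark and raise over time

def hash_password(password, salt=None, rounds=ROUNDS):
    """Derive a one-way PBKDF2-HMAC-SHA256 hash with a per-user random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, expected, rounds=ROUNDS):
    """Re-derive and compare in constant time to avoid leaking timing info."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, expected)
```

Store the salt and digest per user; because each salt is random, two users with the same password still end up with different stored hashes, which is exactly what defeats a rainbow table.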
Consider using two-factor authentication in your application if you really want to control access. Two-factor authentication combines the traditional username and password with a one-time PIN emailed or texted to the user at the time of login. You might also use a hardware token or mobile authenticator app to provide the PINs.
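The PINs generated by authenticator apps and many hardware tokens are typically time-based one-time passwords (TOTP, RFC 6238). A minimal sketch using only the Python standard library, for illustration rather than as a drop-in implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time PIN (HMAC-SHA1 variant).

    `secret` is the shared key; `at` is a Unix timestamp (defaults to now).
    """
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the user’s device share the secret and the clock, so both can compute the same six-digit PIN for the current 30-second window; in practice you would use a vetted library rather than rolling your own.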
Step 2: Authorize Users
This step addresses the “Access Control” standard. This standard dictates that only authorized users should be able to access ePHI and perform the actions they are supposed to perform. This implies some sort of permission system.
Most web applications that involve ePHI are complicated enough that they require more than one role. You might even have some roles that should not have access to any of the protected health information. You can certainly use roles to protect the various parts of your application, but I highly recommend using a permission-based security system instead. Luckily for you, I have already written an article that addresses this: Creating a Permission-Based Security System in .NET.
With a permission-based system, it is easy to protect parts of a screen while still allowing access to the rest. For example, some users might need access to a screen that shows patient information, but they do not need to see the patient’s social security number to do their job. In this case, it is best to show only the last 4 digits (if even that). You can even provide a button that allows them to view the full SSN, but perhaps require a supervisor’s login to see it. The most important thing is that the web page should never contain the full SSN if the user does not have permission to see it. Only the last 4 digits should be sent from the server.
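The server-side masking rule can be captured in one small function. A sketch, assuming the permission check itself has already happened upstream:

```python
def masked_ssn(ssn, can_view_full):
    """Return only what the caller is authorized to see.

    The full SSN must never leave the server for users without the
    view-full-SSN permission; they get at most the last 4 digits.
    """
    if can_view_full:
        return ssn
    return "***-**-" + ssn[-4:]
```

The important design point is that masking happens on the server before the response is built; masking in JavaScript on the client would still put the full SSN on the wire and in the page source.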
Step 3: Audit Everything
This step addresses the “Audit Controls” standard. This standard dictates that all access to ePHI should be audited, stored, and examined periodically for unauthorized or suspicious access. This requires the use of log files and audit tables.
There is one very important rule you must follow when it comes to logging: never write ePHI data to log files. Those are usually not protected as well as the database so they are easier to steal. Use record identifiers (like the primary key numbers) instead when writing debug information to the log file so you do not give away sensitive data.
While log files are good for writing information that helps you track down problems, audit tables are normally used to track things like when a status changes on an entity in your application, who caused the change, and when it happened. Anything that goes through a workflow of some kind, no matter how small, should probably be audited.
In addition, anything security-related must be audited. That includes successful logins, failed logins, password changes, screen/record access, ePHI access (like viewing a full SSN), and updates to ePHI data. This information is critical when tracking down illegal access or identity theft.
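A sketch of what a structured security-event record might look like, with a plain list standing in for an append-only audit store. Note that it records an opaque record identifier, never the ePHI itself:

```python
import datetime
import json

def audit_event(store, actor, action, record_id=None):
    """Append one structured security event to an append-only store.

    Only a record ID goes into the event, never ePHI values themselves.
    """
    store.append(json.dumps({
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "record_id": record_id,
    }))

trail = []
audit_event(trail, "jdoe", "LOGIN_FAILED")
audit_event(trail, "jdoe", "VIEWED_FULL_SSN", record_id=42)
```

In a real application the store would be a write-only database table or log pipeline, and the event names are whatever taxonomy your auditors agree on; the ones shown here are made up for the example.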
Step 4: Track Changes in a History Table
This step addresses the “Integrity” standard. This standard dictates that ePHI should be protected from improper alteration and destruction. While much of this should be handled with access control at the server level, there are some things developers can do to mitigate the possible loss of information.
One such way is to store all field-level changes of ePHI data in a history table, along with who made the change and when. This data could be used to recreate the original data in case of malicious (or unintentional) changes. One good way to do this is by using database triggers.
I do not usually like to use triggers in an application because they can lead to poor performance, they often cause deadlocks, and they are difficult to troubleshoot when problems arise. But they are great for creating history tables. If you create insert, update, and delete triggers that insert the changed data into a history table, it doesn’t matter whether a user changed a value from your application or a DBA ran an update statement directly against the database; the change will still be logged. If you instead create your history records in the data layer of your application, you will miss any changes made to the database through scripts.
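Here is the trigger idea in miniature, using SQLite from Python so it is easy to run end to end. A production system would use its own database’s trigger syntax and would also capture the acting user from session context, which SQLite cannot provide:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, ssn TEXT);
CREATE TABLE patient_history (
    patient_id INTEGER,
    old_ssn    TEXT,
    new_ssn    TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- The trigger fires no matter who issues the UPDATE: the application,
-- a DBA running an ad-hoc statement, or a migration script.
CREATE TRIGGER patient_ssn_update AFTER UPDATE OF ssn ON patient
BEGIN
    INSERT INTO patient_history (patient_id, old_ssn, new_ssn)
    VALUES (OLD.id, OLD.ssn, NEW.ssn);
END;
""")
db.execute("INSERT INTO patient (id, ssn) VALUES (1, '123-45-6789')")
db.execute("UPDATE patient SET ssn = '999-99-9999' WHERE id = 1")
row = db.execute("SELECT old_ssn, new_ssn FROM patient_history").fetchone()
```

After the update, `patient_history` holds both the old and new values even though the application never wrote to it directly; that is exactly the property an application-layer history mechanism cannot guarantee.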
Since this step is about Integrity, be sure to enforce data integrity as part of your database and application design. Use primary and foreign keys whenever you can. It is also wise to decouple the patients’ identifying data from their medical records. That is, store this data in separate tables (or even separate databases) so that if a hacker steals one set, they cannot match it up with the other.
For example, store patient information that includes name, address, SSN, etc. in one database and give each patient a unique identifier. Then store medical records for the patients in a separate database and link them using the patient unique identifier. Without both sets of data, a hacker could not tell which medical records belong to which patients.
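A toy sketch of that linkage, with two dictionaries standing in for the two databases and a random UUID as the opaque patient identifier:

```python
import uuid

# Two separate stores standing in for two physically separate databases.
identity_db = {}   # demographics: name, address, SSN
medical_db = {}    # clinical records, keyed only by an opaque identifier

def register_patient(name, ssn):
    """Create a patient in both stores, linked only by a random UUID."""
    patient_key = str(uuid.uuid4())   # reveals nothing about the patient by itself
    identity_db[patient_key] = {"name": name, "ssn": ssn}
    medical_db[patient_key] = []
    return patient_key

key = register_patient("Jane Doe", "123-45-6789")
medical_db[key].append("2015-03-01: annual physical")
```

Because the medical store contains no names or SSNs, stealing it alone yields records that cannot be tied to a person; the attacker would need both stores and the linking keys.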
One other thing I will mention here under the Integrity standard: always encrypt social security numbers (and possibly other ePHI data) in the database using strong two-way encryption such as 256-bit AES (Advanced Encryption Standard) with a unique random IV for each record. Believe it or not, this is not strictly a HIPAA requirement; encryption at rest is an “addressable” specification. You could leave the SSNs in plain text if you wanted to, but that is highly discouraged since it leaves you wide open to identity theft.
Step 5: Use a Secure Transmission Protocol
This step addresses the “Transmission Security” standard. This standard dictates that all communication over a network to and from the application should be secure and protected. This is actually the easiest one, assuming you are serving the application over HTTPS using TLS (the successor to the now-deprecated SSL protocol).
In fact, you should require it. Either reject access completely or automatically redirect to HTTPS when a user tries to access the application over unencrypted HTTP. Of course, your site should have a valid, signed certificate from a trusted Certificate Authority like Verisign. This will ensure all communication is encrypted from the user’s browser to the server and back.
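The redirect can live in one small piece of middleware. A WSGI sketch using only the Python standard library; a production setup would usually also send an HSTS header and preserve the query string:

```python
def require_https(app):
    """WSGI middleware: redirect any plain-HTTP request to its HTTPS URL."""
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "localhost")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        return app(environ, start_response)   # already secure; pass through
    return middleware
```

Wrapping the whole application this way means no individual screen can accidentally be served over plain HTTP; in many deployments the same job is done at the load balancer or reverse proxy instead.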
For an even greater level of security, you might also choose to encrypt ePHI data on the server and decrypt it on the client (and vice versa). This provides an extra layer of protection, but isn’t strictly necessary.
Finally, never send sensitive data like passwords and ePHI values in the query string of the URL (i.e., in a GET request). While this data will be encrypted when sent over HTTPS, the values are stored in the browser history and in server logs, which exposes them to possible unauthorized access. Always use a POST request when possible.
Writing a web application that is HIPAA-compliant does not have to be hard. Most of the items above are quite easy to plan and implement when considered from the very beginning. Think about using these steps for your other applications as well in order to make them more secure. There is no such thing as too much security.
–By Jon Hester, Senior Software Developer and Architect at Kopis
Jon Hester is a senior software engineer and architect at Kopis. He has been with the company for over 10 years and specializes in business analysis, user interface design, and complex problem solving. In his spare time, he enjoys computer gaming, reading, activities with his kids, and playing practical jokes on his coworkers.