Software security is hopelessly broken
Blaine Osepchuk
Posted on January 6, 2018
As software developers, we are doing a terrible job of protecting the data we collect from our users because software security is hopelessly broken. This is a huge topic so I'll restrict my comments to coding, encryption/hashing, web server configurations, regulation, and what we can do about the security of the software we create and maintain.
Programming needs to be significantly safer by default
We're failing on the easy stuff. We are guilty of hard-coding API passwords in our code bases and then posting them on GitHub, insecurely storing user passwords, writing code that's vulnerable to injection and overflow attacks, failing to properly validate data before using it, not protecting our backups, not deleting data we no longer need, and so on.
I bought a book on secure programming around 2002 and all the risks identified in that book are still very much with us today. We've barely moved the needle at all in the last 15 years!
The only way we are going to make significant progress on software security issues is to make programming safer by default. Trying harder hasn't worked and is unlikely to work for the vast majority of projects in the future.
Sure, it would be great if every developer had security training and lived by the programmer's oath. It would definitely help. I wrote quite a popular post about software professionalism and security if you're interested.
The problem is that we already have a thousand things to remember every time we write a line of code and it's naive to think that humans (with our pitiful working memory of roughly 7 ± 2 items) will ever remember to do everything right all the time. (Or that your boss will let you take another month to review your code for security problems before you release it.)
Secure programming in C is basically impossible
Have you ever looked at the security guidelines for properly range checking an array index in C? Not fun. Who's going to get that 100% correct every time? If you write a significant project in C, you are going to have trouble ensuring that you never get an array index out of bounds, an overflow, or a null pointer dereference.
You can staple copies of the MISRA C standard to your developers' foreheads and review every commit 5 times but you'll still miss piles of errors.
What does safer look like?
- computer languages that have automatic bounds checking
- database abstraction layers that automatically escape inputs to prevent SQL injection attacks
- templating engines that automatically escape output by default to prevent cross-site scripting
- form builders that automatically add and check for a unique token on every submission to prevent cross-site request forgery
- data validators that make it easy to prevent injection attacks
- web frameworks that have well designed and tested authentication and authorization capabilities
- tools that allow software developers to statically and dynamically examine their code and report a prioritized list of problems
- security scanners that are easy to use
These things work because you get the security benefits for free (but only if you actually use them). Secure coding has to be automatic and free if we expect it to work.
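Two of the items above (automatic SQL escaping and automatic output escaping) are already free in core PHP. A minimal sketch, using PDO prepared statements and `htmlspecialchars()` (the in-memory SQLite database and sample data are just for illustration):

```php
<?php
// Sample table with deliberately HTML-looking user data.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE users (id INTEGER, name TEXT)');
$pdo->exec("INSERT INTO users VALUES (1, '<b>Alice</b>')");

// Parameterized query: user input never touches the SQL string,
// so injection is prevented by construction, not by vigilance.
$stmt = $pdo->prepare('SELECT name FROM users WHERE id = ?');
$stmt->execute([1]);
$name = $stmt->fetchColumn();

// Escape on output to prevent cross-site scripting.
echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8');
```

The developer gets the security benefit without thinking about quoting rules at all, which is exactly the point.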
Password hashing and encryption need to be idiot-proof
We need simple ways to get the benefits of the most up-to-date programming practices without becoming experts. PHP developers have actually done some impressive work in this area.
Secure password hashing
For example, password hashing in PHP is now simple to use and strong by default. PHP has three functions in the core that do everything you need to securely store and verify passwords. We upgraded one of our websites in a couple of hours. So, PHP now takes care of the salting and secure hashing of our passwords. Our code will even upgrade our hashes automatically in the future when something better comes along.
Here's the best part: people using PHP's secure hashing functionality don't need to understand security best practices, salting, rainbow tables, or the difference between md5 and sha-256. And that's the way it should be.
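The three core functions in question are `password_hash()`, `password_verify()`, and `password_needs_rehash()` (available since PHP 5.5). The entire login flow, including the automatic future upgrade mentioned above, looks roughly like this:

```php
<?php
// At registration: hash with PHP's current recommended algorithm.
// Salting and algorithm choice are handled for you and encoded
// into the hash string itself.
$hash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);

// At login: verify, then transparently upgrade the stored hash
// if PHP's default algorithm has improved since it was created.
if (password_verify('correct horse battery staple', $hash)) {
    if (password_needs_rehash($hash, PASSWORD_DEFAULT)) {
        $hash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);
        // ...persist the new $hash to the database here...
    }
}
```

No salting code, no algorithm names, no security expertise required.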
Secure application level encryption
Application level encryption should be dirt-simple to use. Anybody should be able to call encrypt($cleartext, $key) and decrypt($cyphertext, $key) and know that it's secure without understanding anything about how encryption works.
If you're an expert go ahead and use the lower level functions. But most of us just need to encrypt a string and store it safely so we can decrypt it later. So just give us something safe to use and we'll use it. Encryption isn't quite as easy to use as password hashing in PHP but it's getting close. Check out this implementation (scroll down for example code). I imagine simpleEncrypt() and simpleDecrypt() or something similar will eventually make it into the PHP core.
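A minimal sketch of what such an interface could look like, built on the libsodium extension bundled with PHP 7.2+. Note that `simpleEncrypt()`/`simpleDecrypt()` are hypothetical names echoing the post, not a real core API; the underlying `sodium_crypto_secretbox*` functions are real:

```php
<?php
// Hypothetical wrapper: authenticated encryption with a random nonce
// prepended to the ciphertext so the caller never handles nonces.
function simpleEncrypt(string $cleartext, string $key): string {
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    return $nonce . sodium_crypto_secretbox($cleartext, $nonce, $key);
}

function simpleDecrypt(string $cyphertext, string $key): string {
    $nonce = substr($cyphertext, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $box = substr($cyphertext, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    // Returns false if the ciphertext was tampered with.
    return sodium_crypto_secretbox_open($box, $nonce, $key);
}

$key = sodium_crypto_secretbox_keygen();
$secret = simpleDecrypt(simpleEncrypt('attack at dawn', $key), $key);
```

The caller supplies a cleartext and a key and gets authenticated encryption; everything else (nonce generation, tamper detection) happens out of sight.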
Servers need to be easier to configure and more secure by default
Have you ever tried to set up a web server and make it secure? I have, and it's not fun on Windows or Linux. The level of knowledge you need to do this well is insane. But even if you do manage to create what you believe is a "secure" configuration, you have no guarantees that your server will remain secure tomorrow or next week.
What would be better? Imagine if Apple developed the GUI for a web server OS that was built to the security standards of the OpenBSD project. This is out of my wheelhouse so forgive me if I say something silly.
Here are some features I'd like to see in a secure web server OS:
- it's easy to see the configuration of the system and how it has changed over time (and who changed it)
- the server monitors the behavior of logged-in users and reports anything suspicious (along with a recording of everything they did and saw during their session)
- it's easy to see if someone is attacking your system, how they are attacking it, and what the OS is doing to stop the attack from succeeding
- the server should contact the admin if it needs help defending itself from an attack (and suggest actions the human should take)
- the OS should only allow software it thinks is safe to be executed (I know this is very challenging in practice but I can dream)
- configuration changes are made through wizards (or scripts) and the system won't allow you to make silly configuration mistakes (like creating an ssh account with an easily guessed password)
- the OS should monitor how it is used and suggest or automatically turn off unneeded functionality
- the OS should automatically install signed updates without requiring a reboot but allow rollback if necessary (or have a configurable update policy)
- built-in encrypted full backups with one click restores
- the OS should be wary of new hardware and anything plugged into its USB ports
- the file system is encrypted by default
- the OS uses address space layout randomization by default
- multiple servers can be managed from a single interface with ease
- the server should fail safely (never reveal sensitive information about itself or its data)
- the OS should be able to run a self-test and tell you all the places it can be accessed/exploited
- the OS should learn from the successes and failures of other systems to improve its security and performance (like anti-virus software does today)
- all firmware is cryptographically signed
I know this stuff is easier said than done but you can't dispute the fact that there's lots of room for improvement here. There's also no shortage of controversy around making computing safer. In many ways freedom and flexibility are at odds with security.
New regulations are going to force us to change the way we design and construct software
I'm interested to see what is going to happen to the software world when the EU's new data protection regulations come into effect on May 25, 2018. These regulations are specific and the penalties for not complying with them are steep but the details of how it's going to be enforced are still unclear. I'd be surprised if 2% of the software in the wild that contains user data complies with these regulations. And making your existing software compliant is going to be expensive.
Plus, this is just the beginning of the regulation of non-safety critical software. I predict more and more regulation will be thrown at us as people get tired of data breaches and the damage caused by our crappy software. People will seek government protection.
I also wonder when insurance companies are going to routinely set premiums for businesses based on what kind of software they develop and how carefully they develop it.
It should be interesting to see how it all turns out.
Okay, software security is hopelessly broken. What happens next?
I believe we'll get slightly better at writing secure software in the coming years. But the bad guys will continue to steal our data with ease.
We'll use safer languages, better tools, and incrementally better software engineering practices (like testing, static analysis, and code reviews) to create software that offers our users slightly more protection. Big companies like Google, Microsoft, and Facebook will do a better job of writing secure software than small companies. Apps and IoT devices will remain an absolute disaster area but almost all software will remain vulnerable because, like I've said before, software security is hopelessly broken.
There are just too many ways to make a programming or configuration mistake, to be tricked into defeating your own security, or to have your system attacked at another level (network, router, OS, hardware, firmware, physical, etc.).
Plus there are billions of lines of code out there that will never and can never be upgraded to be secure because:
- the software has been abandoned
- the expense of modifying existing software is prohibitive
- it's basically impossible to add effective security controls to an existing insecure system
- we don't have enough security experts to go around
- there's no money in fixing it
Conclusion
Here's the thing: our entire modern computing apparatus is held together with duct tape. There is no bedrock layer in computing that we could drop down to and say "everything below this point is safe so we're just going to rebuild from this level."
Nothing like that exists. We could employ security experts to redesign/rewrite everything from scratch (hardware, firmware, OS, and applications, protocols, etc.) with insane attention to detail. But we don't yet know how to make complex software without errors, certainly not at the scale we are talking about here.
Plus, what are you going to do about the people? They're part of the system too. Remember, the bad guys can just drug you and hit you with a wrench until you give up all your passwords.
And you also have to worry about physical security as well because someone could slip a key logger between your keyboard and your computer. Or remotely read the EMF off your keyboard (it works with wired keyboards too). Or just install a small camera in your room near your computer and take videos of your screen. Or activate your webcam and read your screen in the reflection in your glasses. Or any of a million other things.
Nope. The truth is that software security is hopelessly broken.
What can you do?
- keep your software up to date; security updates are the best defense you have
- comply with all applicable laws and regulations such as GDPR, HIPAA, PCI-DSS, etc.
- educate yourself about security best practices, the tools, and the languages available to you to help you reduce the cost of writing secure software
- use a risk mitigation strategy to protect your most sensitive data first because you can't fix everything all at once
- allocate time to fix lower priority security issues because they are never going to fix themselves
- raise awareness about your security issues by talking about them with your coworkers (both programmers and non-programmers)
What do you think? Do you believe software security is hopelessly broken? I’d love to hear your thoughts.