
Saturday, January 10, 2015

Cloud-based apps are extremely vulnerable – here's what to do

And the Two Design Patterns That All Developers Should Know


15 per cent of business cloud users have been hacked.

That is according to a recent Netskope report (article here).

Recent debates about whether cloud storage is secure have focused on the infrastructure of the cloud: that is, if your data is in the cloud, can other cloud users see it? Can the cloud provider see it?

But there has been little attention to the even more important issue of whether cloud apps themselves are secure. This matters because if your data is in the cloud, it is not behind your company’s firewall – it is accessible over the Internet, and all anyone needs is the password. So if you think things were bad before, just wait until hackers shift their focus to the cloud.

Executives think that IT staff know how to
write secure software applications
– but most don’t.

Companies spend huge amounts of money trying to make their infrastructure secure, but they invest essentially nothing in making sure that their application code itself is secure. As I wrote in my book High-Assurance Design:

•    The average programmer is woefully untrained in basic principles related to reliability and security.

•    The tools available to programmers are woefully inadequate to expect that the average programmer can produce reliable and secure applications.

•    Organizations that procure applications are woefully unaware of this state of affairs, and take far too much for granted with regard to security and reliability.

The last bullet is the most important one: executives think that IT staff know how to write secure software applications – that to do otherwise would be unethical, and their staff are surely not unethical. But this attitude is the heart of the problem, because the fact is that most software developers – and even most senior software architects – know very little about how to write secure software. Security just isn’t that interesting to most programmers: no one rewards you for writing secure code the way you get rewarded for delivering more features. And no one is asking for it, because there is an assumption – an incorrect one – that programmers create secure code in the course of their work, just as plumbers create well-sealed pipes in the course of plumbing. True for plumbers, in general, but not true for programmers.

Recently I was on a DevOps team in which the client was very concerned about security. The client ran its own scans of our servers in the cloud, and found many issues that needed to be fixed. All of these issues were infrastructure related: primarily OS hardening. None had to do with the design of the application. The general feeling of the team was that the security of the application itself would not be questioned, so we did not have to worry about it. At one point, one of our databases in our cloud test environment was hacked. The database was shut down and a forensic analysis was supposedly performed (we were never told what they found). There was no impact on the team’s work – it was business as usual.

If we don’t fix this dysfunction in our industry,
then the Internet of Things (IoT) will be a disaster.

This state of affairs is unsustainable. If we don’t fix this deep-rooted dysfunction in our industry, then the Internet of Things (IoT) will be a disaster: imagine having every device you own connected to the Internet – to a cloud service of some kind – and all of these devices and accounts hackable. And imagine the continuous software updates needed to keep pace with newly discovered security vulnerabilities. This is not a future that I want – do you? Not only is George Jetson’s car a pain to maintain – with constant software updates – but it might come crashing down. People will be afraid to drive their cars or use these IoT devices.

The only way to fix this is for organizations to demand that developers learn how to write secure software. You cannot scan your way to application-level security: scanning is simply not effective at finding design flaws. Having a “security officer” oversee things is not effective either – not unless that person intends to inspect every line of code written by every programmer, and that is not feasible in an Agile setting, where the code changes every day. The only way to produce secure software in an Agile environment is for the programmers to know how to do it.

It is not that resources are lacking – there are plenty, my own textbook included. There are tons of books, there are online resources – notably OWASP – and there are even certifications. And these certifications are the real deal: they are not fluff courses.

People like magic bullets. Unfortunately, there is no magic bullet for security: knowledge is the only path. But if I were asked what two things software developers should know to make their code more secure, I would have to say that they should know about these two design patterns: (1) Compartmentalization, and (2) Privilege Separation.

Your systems will be hacked. The only question is,
What will the hackers get away with?

Your systems will be hacked. There is no question about that. The only question is, What will the hackers get away with? Will they be discovered right away through intrusion detection monitoring and shut down? And if not, will they be able to retrieve an entire database of information – all of your customers’ personal data? That is, will one compromised account enable them to pull down a complete set of information?

Compartmentalization is an old concept: in the context of computers, it was first formalized by the Bell-LaPadula model for security. That model became the basis for security in early military computer systems, and it formalizes the essential approach used by the military and intelligence communities for protecting sensitive information: a person requesting access to information must have (A) a sufficient trust level – i.e., they have been vetted with a defined level of thoroughness – and (B) a need to know – that is, a legitimate reason for accessing the information. No one – not even the most senior and trusted person – automatically has access to everything: they must have a need to know. Thus, if someone needs information, you don’t open the whole filing cabinet: you open only those files that they have an immediate need for. To open others, they must request permission.
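To make the idea concrete, here is a minimal sketch of a Bell-LaPadula-style check in Python. All of the names here (Subject, Document, can_read) are illustrative, not from any real library; the point is simply that access requires both a sufficient trust level and membership in the compartment – seniority alone is never enough.

```python
# A minimal sketch of the Bell-LaPadula-style rule described above:
# access requires BOTH a sufficient trust level AND a need to know.
# All names and levels are illustrative assumptions, not a real library.

from dataclasses import dataclass, field

# Ordered trust levels: a higher number means more thorough vetting.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

@dataclass
class Subject:
    name: str
    clearance: str                                   # vetted trust level
    need_to_know: set = field(default_factory=set)   # compartments they may access

@dataclass
class Document:
    title: str
    classification: str
    compartment: str                                 # e.g., a project or case file

def can_read(subject: Subject, doc: Document) -> bool:
    """Grant access only if the subject is cleared to the document's level
    AND belongs to its compartment. Seniority alone is never enough."""
    cleared = LEVELS[subject.clearance] >= LEVELS[doc.classification]
    needed = doc.compartment in subject.need_to_know
    return cleared and needed

# Even a top-secret-cleared subject cannot open a compartment
# they have no need for:
alice = Subject("alice", "top_secret", need_to_know={"project-x"})
assert can_read(alice, Document("plan", "secret", "project-x"))
assert not can_read(alice, Document("budget", "confidential", "project-y"))
```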

Military computing systems are onerous to use because of the layers of security, but in a civilian setting, for business applications, there are ways to adopt the basic model while making parts of the process automatic. For example, restrict the amount of information that an individual can access in one request: don’t allow anyone to download an entire database, regardless of what level of access they have. And if they start issuing a lot of requests – more than you would expect based on their job function – then trigger an alarm. Note that to implement this type of policy, you have to design the application accordingly: this type of security is not something that you can bolt on, because it requires designing the user’s application in such a way that they access only what they need for each transaction and are not given access to everything “in that file cabinet”.
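Here is a minimal sketch, in Python, of that automatic policy: cap how many rows any single request may return, and raise an alarm when a user issues more requests than their job function would explain. The names (CompartmentalizedStore, RateAlarm) and the thresholds are hypothetical – in practice you would tune them to each job function.

```python
# A minimal sketch of automatic compartmentalization: no single request can
# open "the whole filing cabinet", and abnormal request volume raises an alarm.
# Class names and thresholds are illustrative assumptions.

import time
from collections import deque

MAX_ROWS_PER_REQUEST = 50     # no request may return the whole table
MAX_REQUESTS_PER_HOUR = 100   # beyond this, something abnormal is happening

class RateAlarm(Exception):
    """Raised when a user's request volume exceeds their expected need."""

class CompartmentalizedStore:
    def __init__(self, rows):
        self._rows = rows          # the full table, never exposed directly
        self._requests = {}        # user -> deque of recent request times

    def query(self, user: str, predicate) -> list:
        # Track the per-user request rate over the last hour.
        history = self._requests.setdefault(user, deque())
        now = time.time()
        while history and now - history[0] > 3600:
            history.popleft()
        history.append(now)
        if len(history) > MAX_REQUESTS_PER_HOUR:
            raise RateAlarm(f"abnormal request volume for {user}")

        # Return at most MAX_ROWS_PER_REQUEST matches; bulk-downloading the
        # table would take thousands of calls and trip the alarm first.
        matches = [row for row in self._rows if predicate(row)]
        return matches[:MAX_ROWS_PER_REQUEST]
```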

The other key concept that programmers need to know is “privilege separation”. No one should be able to access a large set – e.g., a table – of sensitive data directly: instead, they should have to go through a software service that does it for them. For example, if a user needs to find out which rows of a table meet a set of criteria, the user should not be able to access or peruse the table directly: the user should only be able to initiate the filter action and receive the result. The filter action is a software service that performs the required work under the privileged account of the server – an account the user cannot access. The user performs his or her work using an account that is only able to initiate the software service. If the user’s account is obtained through a phishing attack, that account cannot be used to obtain the raw data in the database: retrieving the entire table would require a huge number of calls to the service, and intrusion monitoring should be watching for abnormal use such as that. This does not prevent hacking, but it greatly limits what can be lost when a hack occurs.
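A minimal sketch of privilege separation, again in Python, with an in-memory SQLite table standing in for the sensitive data. The service name and schema are illustrative assumptions; the essential point is that the privileged database connection lives only inside the service, and the user’s account can do nothing except initiate the filtered lookup.

```python
# A minimal sketch of privilege separation: the user's account holds no
# database credentials at all; it can only invoke the lookup service, which
# runs under the server's privileged account and returns a single result.
# Names and schema are illustrative assumptions.

import sqlite3

class PrivilegedLookupService:
    """Runs server-side under a privileged account; users never see the table."""

    def __init__(self, db_path=":memory:"):
        # The privileged connection exists only inside the service.
        self._conn = sqlite3.connect(db_path)
        self._conn.execute("CREATE TABLE accounts (id INTEGER, email TEXT)")
        self._conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                               [(1, "a@example.com"), (2, "b@example.com")])

    def find_account(self, email: str):
        # The only action a user may initiate: a filtered lookup that yields
        # one row, never the raw table. A phished user account could only
        # replay this call, and bulk retrieval would look abnormal.
        row = self._conn.execute(
            "SELECT id FROM accounts WHERE email = ?", (email,)).fetchone()
        return row[0] if row else None

service = PrivilegedLookupService()
print(service.find_account("a@example.com"))   # -> 1
```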

These measures are not sufficient, but they are a start, and they provide a foundation for how to think about application level security, from which programmers can learn more. The key is to start with an access model based on the kinds of actions that users need to perform and the subsets of data that they need direct access to for each transaction – access is not simply based on their overall level of trust or general need to access an entire class of data.

Organizations are completely to blame for the current state of affairs – and organizations can fix it.

Organizations are completely to blame for the current state of affairs: If organizations demand that programmers know how to write secure code, then programmers will respond. People are merely focusing on what their bosses are telling them is important.

So if you are an executive in an IT organization, it is up to you. The industry will not fix things: You need to make security a priority. You need to tell your teams that you expect them to learn how to write secure code. You need to create incentives for programmers and software architects to become knowledgeable and even certified in secure coding. You need to create a culture that values security. Security is up to you.
