We are all aware of the ever-increasing weight of assaults on IT systems. From simple spam through to the complexity of Stuxnet, the problem is ever changing and ever growing.
And the malefactors are evolving faster than the benefactors.
So what do we do?
I have been reading many reviews of security issues in 2011. Diverse sources that I think are worth reviewing for anyone interested in security are listed in the references at the end of the article.
One thing that seems to pop up a lot is that, within the benefactors' circle, not a lot changed.
Perhaps we are all a little too complacent that our existing solutions will be capable of evolving appropriately and quickly to new threats? Perhaps benefactors are more concerned about the financials they must protect than combining to provide global solutions? Perhaps none of us is willing to take the IT plunge and admit that what is required is a new start?
None of this is surprising. The amount of time (decades now) and money invested in the technologies currently in place is staggering. We are all working flat out trying to protect this expenditure. To be frank, it is hard to justify anything else, especially with the bottom lines of all businesses and households squeezed so very hard at the moment.
Plus we must continue to work with the existing technologies in place elsewhere. We can’t arbitrarily use a different mail transfer system because everyone else is still using SMTP.
Changing this level of technology to a new secure-from-inception model requires the buy-in of everyone, security and anti-malware companies included.
It would be nice to think that we can continue to grow our solutions such that they can cope with anything, but the sheer size of most anti-malware suites shows you how difficult this is.
So what is this new Utopia?
Well I would like to throw in a penny or two, though I have no engineered solution…
- Closed First, Open Second Ingress
- Point-to-point Encryption
- Client Identification
- Service Identification
You have a service, you want to sell it, you want to promote it to as many people as possible and you want it to be easy for customers to find and use. Oh, and perhaps as secure as possible, as long as it doesn’t impact ease of use.
Basically, we should be insisting on closed and secure systems as our starting point. Banks are beginning to get to grips with this a little, with customer applications that enforce a closed gateway. The idea I have here is that you need a lot of specific technology to even get into the service. The more sensitive the data or service, the more secure the system should be, unless there is a default level that everyone must adhere to that simply does the job. An example using current technology might be a VPN into a DMZ to access data such as client accounts or bank statements.
It makes sense for absolutely all communications that could be intercepted to be encrypted, from point of creation through to final destination. There are difficulties, especially for law or policy enforcement, but surely we should assume all our data is sensitive?
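As a toy illustration of what "encrypted from point of creation through to final destination" means, here is a minimal one-time-pad sketch in Python. This is emphatically not a production scheme (a real deployment would use something like TLS or an authenticated cipher); it only demonstrates the principle that the wire should carry nothing but ciphertext:

```python
import secrets

def otp_encrypt(plaintext):
    # The pad must be truly random, as long as the message, and never reused.
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return pad, ciphertext

def otp_decrypt(pad, ciphertext):
    # XOR with the same pad undoes the encryption exactly.
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

pad, wire = otp_encrypt(b"transfer 20 pounds to account 123")
# Only `wire` travels over the network; an eavesdropper sees random bytes.
assert otp_decrypt(pad, wire) == b"transfer 20 pounds to account 123"
```

The hard part, of course, is getting the key to the other end safely, which is exactly what protocols like TLS exist to solve.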
Well, we all know how this currently works; Username and Password. Simple and easily broken.
Some systems add a mechanism for secondary authentication, like Google and Visa, but so many systems still rely on the old credentials system. Perhaps we as users need to push this harder. I know I hate remembering cryptic passwords, but I try. At work we have an encrypted database of passwords. There are risks therein, of course, but you substantially move the goalposts with this kind of system. We are even moving towards AD for all user access, clearly identifying the individual and defining their access rights.
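Secondary authentication of the kind Google offers is typically built on time-based one-time codes. A sketch of how such a code can be generated, in the style of RFC 6238 TOTP using only the Python standard library (the secret below is the RFC test key, not anything vendor-specific):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Time-based one-time password in the style of RFC 6238 (SHA-1)."""
    # Counter = number of 30-second intervals since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset
    # chosen by the digest's last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t=59s the 8-digit SHA-1 code is 94287082.
print(totp(b"12345678901234567890", now=59, digits=8))
```

Because the code changes every thirty seconds, a stolen credential is worthless within a minute, which is precisely the property a static password lacks.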
Telephone banking uses keyed questions to try to confirm you are who you say you are. But aren't these simply keyed passwords, and easier ones to guess at that?
No, we need a way to uniquely identify ourselves. Answers on a postcard please.
If we have to fully identify ourselves, then it is a must that the service we are connecting to returns the gesture and provides us with something we in turn can trust, something that absolutely proves we are talking to the intended party. Two-factor client authentication may be part of this. Perhaps the new temporary key could also include a first-factor identification of the service itself?
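One way the service could "return the gesture" is a mutual challenge-response over a pre-shared key: each side proves it knows the secret without ever sending it. This is only a sketch under the assumption that a key has already been provisioned out of band (the genuinely hard part); the names `prove` and `verify` are illustrative:

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"pre-provisioned key"  # hypothetical key set up out of band

def prove(secret, nonce, role):
    # The role tag stops a reflection attack (replaying a peer's own proof).
    return hmac.new(secret, role + nonce, hashlib.sha256).digest()

def verify(secret, nonce, role, proof):
    # Constant-time comparison of the expected proof against the received one.
    return hmac.compare_digest(prove(secret, nonce, role), proof)

# The client challenges the service with a fresh random nonce...
client_nonce = secrets.token_bytes(16)
service_proof = prove(SHARED_SECRET, client_nonce, b"service")
assert verify(SHARED_SECRET, client_nonce, b"service", service_proof)

# ...and the service challenges the client in return.
service_nonce = secrets.token_bytes(16)
client_proof = prove(SHARED_SECRET, service_nonce, b"client")
assert verify(SHARED_SECRET, service_nonce, b"client", client_proof)
```

The fresh nonces mean an eavesdropper who records one exchange cannot replay it later, since the next challenge will be different.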
Man-in-the-Middle is a tricky kind of attack, but it is much more difficult if your communication is encrypted from the start.
Perhaps I’m being a bit naive, but these are ground-up tasks, none of which is adequately answered in the current environment.
Possibly the biggest issue would be preventing Man-in-the-Middle attacks. We need a way of knowing with absolute certainty that we are correctly and securely connected.
What all of this would leave, hopefully, is that only an insider, within the service provider, could ever hope to crack the security.
This happens now, but it isn’t even necessary, because we send clear-text passwords across an unencrypted network to an unverified service.
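For contrast, what the service provider could hold instead of our clear-text password is a salted, deliberately slow hash, so that even an insider only ever sees something expensive to reverse. A minimal sketch using the Python standard library (the iteration count is an illustrative choice, not a recommendation):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password, salt=None):
    # The server stores (salt, digest); the password itself is never kept.
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking where the mismatch occurs.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("guess", salt, digest)
```

The per-user random salt means two users with the same password get different digests, defeating precomputed lookup tables.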
We need IT Security to be as difficult to bypass as it is to rob a bank. Physically.
We need to let go of some of those ease-of-use hang-ups. Like it or not, it should be difficult to transfer money from one account to another.
I can’t see any of this getting resolved this decade let alone this year. We simply need to start again. Web technology is too flawed to place secured technology on top.
A new Internet is required on which to start building these new, secured services.
Computer Weekly 27/01/2011
Google Security Blog 27/01/2011
Symantec 27/01/2011
Trend Micro
Alliance and Leicester (Rapport) 27/01/2011
Google Enterprise Blog (Two Phase Authentication) 27/01/2011
Verified by Visa 27/01/2011
Wikipedia (Man in the Middle) 27/01/2011
Tech Republic (Inside Attack) 27/01/2011
– Posted using BlogPress from my iPhone