Filed under: Uncategorized
This article was published in the GOTO Conference Magazine 2011, Aarhus, Denmark.
Cloud computing is the result of an evolution of computing technology that began in the 1950s and 1960s and continued until major IT technologies (such as servers, operating systems, databases, application servers, etc.) were commoditised, i.e. turned into interchangeable products that have no significant differentiation and compete only on price.
In one cloud computing model, computing infrastructure (computing power, memory and storage), applications, or platforms for application development are offered for consumption and can be used as a service on pay-per-use terms.
In addition, the service is elastic: the resources provided can be increased or decreased as needed (on demand). Unlike the old methods of hosting, this model gives the consumer control over the resources through a web interface or APIs.
The technological changes inherent in cloud computing enable a service-based economy, which paves the way for business development and greater agility within the organisation and in its connections with other organisations.
However, there are significant risks in the cloud: for example security, compliance, integration and regulation risks, which might necessitate additional investments.
In this paper we will present:
- a security analysis of Dropbox as an example of cloud computing risks, and
- our encryption applications for Dropbox as an example of our security strategy for the cloud.
Let us take a look at one of the most popular cloud storage solutions, Dropbox. This service allows you to store your data in the cloud using any of your devices with an internet connection, from desktops and laptops to tablets and smartphones. Dropbox has created easy-to-use applications targeting different devices and operating systems, which makes it one of cloud computing’s most popular services with 10 million users. More than 100 billion files were stored as of May 2011, and Dropbox saves 1 million files every 5 minutes.
When you start using Dropbox, you will need to register for the service and create an account by choosing a username and password, which will generate a unique Host ID. The Host ID will be stored on all of the devices that access Dropbox with this account. A file folder will be created on your computer, which will be monitored by the Dropbox application. Whenever you change the contents of this folder by adding, modifying or deleting files, the Dropbox servers automatically synchronise these changes with your account’s folder. There’s nothing earthshaking about this capability, but the whole process is amazingly simple and user-friendly, and private users can get up to 2 GB of storage for free.
In order to save storage space and data traffic, every file is split into smaller chunks of up to 4 megabytes in size. When a user tries to upload a file to his Dropbox folder, the local application on his device calculates the hash of each chunk (using the SHA-256 algorithm) and sends it to the Dropbox servers, which compare it with all the file hashes in their database. If the chunk already exists, Dropbox simply adds another link associating it with this user’s account, saving the need to upload it again.
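The split-and-hash deduplication described above can be sketched as follows. This is a simplified illustration only, assuming nothing about the real Dropbox client or server protocol beyond what the text states (4 MB chunks, SHA-256, link-instead-of-upload for known hashes):

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB chunks, as described in the text

# Toy stand-in for the server-side hash database: hash -> chunk bytes.
server_store = {}

def split_into_chunks(data, chunk_size=CHUNK_SIZE):
    """Split a file's bytes into chunks of at most chunk_size."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def upload(data):
    """Upload a file; chunks the server already knows are not re-sent."""
    sent, deduplicated = 0, 0
    for chunk in split_into_chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in server_store:
            deduplicated += 1             # server just links the existing chunk
        else:
            server_store[digest] = chunk  # only new chunks cross the wire
            sent += 1
    return sent, deduplicated

# Uploading the same content twice: the second upload transfers nothing.
first = upload(b"some file content" * 1000)
second = upload(b"some file content" * 1000)
```

The dedup saves bandwidth and storage, but as the next section shows, skipping the upload based on a client-supplied hash is exactly what the SBA researchers exploited.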
Dropbox security and privacy issues
In their paper “Dark Clouds on the Horizon”, researchers from SBA Research reviewed three weaknesses in this mechanism. These weaknesses are caused by the lack of verification of the file hashes and by the weak Host ID authentication.
The Host ID is the only parameter that is used for authenticating users and devices. This means that any disclosure of the Host ID will allow an attacker to get access to all the user data in Dropbox. The Host ID is common to all the devices in the specific account and never changes.
These weaknesses have also been discussed by information security blogger Derek Newton, who reveals that changing the Dropbox account password does not change the Host ID, so changing the password is useless.
SBA Research also pointed out the possibility of Amazon staff (Amazon is the infrastructure provider that runs the system for Dropbox) getting access to the decryption keys of Dropbox users’ secure connections (SSL) and to the Dropbox databases (stored on Amazon’s storage service S3).
This problem also demonstrates one of the main security issues in cloud computing, namely the third-party service provider. Many of the service providers are cloud customers themselves, which makes it complicated for end users to keep track of the data security controls that secure their data in the cloud.
Those weaknesses could lead to different attacks, from the obvious hidden channel to more sophisticated attacks such as uploading malicious files to the user’s Dropbox folder and letting the victim unsuspectingly spread the attack to his different devices.
Another attack suggested in the SBA research paper is called the Online Slack Space attack. It enables an attacker to hide his data in the victim’s Dropbox folder and get access to that data without it being associated with him. This kind of attack also gives the attacker access to free and unlimited storage.
Malicious attacks are not the only problems when you are using a cloud service like Dropbox.
Dropbox demonstrated how dangerous it could be during one of their code updates, when they introduced a bug affecting their authentication mechanism. For the four minutes this faulty code was live, any user could log into an account without the correct password, meaning that he could get access to any account and any data stored on Dropbox servers.
A security problem that occurs while a code update is being conducted could happen on any system, but when it happens on a cloud service, it exposes the users to a larger risk due to the multi-tenant characteristics of the cloud.
Another disturbing issue is the “Patriot Act”, which means that any American company must disclose data stored in its systems to the U.S. authorities upon request. Dropbox increased users’ concerns about these issues by changing their “Terms of Service” (TOS) several times.
The Alexandra Institute’s encryption solution for Dropbox
To meet some of the threats presented in this paper, the Alexandra Institute has created a solution that allows users to independently encrypt their files before uploading them to Dropbox servers. You can find many solutions that support encryption for Dropbox, but what makes this solution unique is that it has been designed to fully support the Dropbox service functionality.
The solution – which is currently only a proof of concept – consists of two applications: one for the computer and another for the smartphone. It allows users to encrypt their files before uploading them to Dropbox servers.
With these applications, we enable the user to create his own independent encryption that he can trust, thus making him less dependent on the service provider to secure his data.
How does it work?
First you need to install the applications both on your computer (desktop or laptop) and on your smartphone. Then you need to enter your Dropbox user account (Host ID) information into our applications.
The next step is to generate your private key, which will be used to encrypt and decrypt the files (an AES-128 symmetric encryption scheme). The key is transferred to the smartphone by means of a QR code (Quick Response code), a visual image of the key displayed on the computer screen. We take a picture of the image with the smartphone’s camera, and the application reconstructs the private key for the smartphone application. Now we can encrypt and decrypt files with our smartphone on our Dropbox service (we generate the key only on the computer due to security and performance issues).
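As a rough sketch of this key-transfer step: the desktop application could generate a random 128-bit key and encode it as a short text payload, which a QR library would then render on screen for the phone to scan. The function names and the Base32 encoding below are illustrative assumptions, not the actual implementation:

```python
import base64
import secrets

def generate_key():
    """Generate a random 128-bit key for AES-128 (16 bytes)."""
    return secrets.token_bytes(16)

def key_to_qr_payload(key):
    """Encode the key as Base32 text; a QR library would render this string."""
    return base64.b32encode(key).decode("ascii")

def qr_payload_to_key(payload):
    """Reverse step on the smartphone: decode the scanned text back to the key."""
    return base64.b32decode(payload.encode("ascii"))

key = generate_key()
payload = key_to_qr_payload(key)
```

Transferring the key optically keeps it off the network entirely, which is the point of the QR-code channel.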
In this process we encrypt not only the files but also the file names. Should an intruder succeed in getting access to our account (i.e. obtain our Host ID), he can view neither our files nor their names. This functionality is obtained by using XML files that hold the file names and regenerate them when the user provides the correct encryption key.
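The filename-hiding idea can be illustrated like this: files are stored under opaque random names, and an encrypted XML manifest maps those names back to the real ones. Everything in this sketch is invented for illustration; in particular, a toy SHA-256-based keystream stands in for the AES-128 used by the real applications:

```python
import hashlib
import secrets
import xml.etree.ElementTree as ET

def keystream_xor(key, data):
    """Toy stream cipher (SHA-256 in counter mode) standing in for AES-128.
    XOR-based, so the same function both encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def build_manifest(real_names, key):
    """Map random storage names to real file names; encrypt the XML manifest."""
    root = ET.Element("manifest")
    for name in real_names:
        entry = ET.SubElement(root, "file")
        entry.set("stored_as", secrets.token_hex(8))  # opaque name seen by Dropbox
        entry.set("real_name", name)
    return keystream_xor(key, ET.tostring(root))

def read_manifest(encrypted, key):
    """Decrypt the manifest and recover the real file names."""
    root = ET.fromstring(keystream_xor(key, encrypted))
    return [entry.get("real_name") for entry in root.findall("file")]

key = secrets.token_bytes(16)
blob = build_manifest(["budget.xls", "notes.txt"], key)
names = read_manifest(blob, key)
```

An intruder holding only the Host ID sees random hex names and an opaque encrypted blob; only the correct key regenerates the real names.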
Our proof of concept consists of a Linux laptop application written in Python and an Android smartphone application written in Java. This flexibility is gained by using standard encryption algorithms and tools.
Given that Dropbox provides the APIs used in the proof of concept for a large range of operating systems, building a more full-fledged solution is “straightforward”.
In this paper we try to demonstrate the risks of using cloud computing services by discussing the vulnerabilities and security issues of the popular cloud storage service Dropbox, as this concerns many of the cloud users and their potential customers. We show a simple and easy-to-use security solution for Dropbox users that allows them to independently encrypt files before uploading them to Dropbox servers. Our solution does not solve all the security issues presented in this paper. But it demonstrates our vision on securing the cloud by creating trust between provider and users through transparency and tools that give the users a sense of control and governance that otherwise seem to be lost in the cloud.
“At Dropbox, Over 100 Billion Files Served – And Counting”, retrieved May 23rd, 2011. Online at http://gigaom.com/2011/05/23/at-dropbox-over-100-billionfiles-served-and-counting/.
“Dark Clouds on the Horizon: Using Cloud Storage as Attack Vector and Online Slack Space”, SBA Research.
The Dropbox blog, http://blog.dropbox.com/?p=821.
Cox, M., Engelschall, R., Henson, S., Laurie, B., Young, E., and Hudson, T.: OpenSSL, 2001.
NCrypto homepage, retrieved June 1st, 2011. Online at http://ncrypto.sourceforge.net/.
Many businesses consider migrating part of their systems or services to the cloud. This is not surprising when you consider the huge potential that cloud computing offers.
Technology must step up a gear to keep pace with business change, or else risk losing out to competitors. That was the key message delivered by Bill McCracken, chief executive of CA Technologies, during his opening keynote speech at the company’s CA World 2011 conference. Illustrating the negative effects of not being able, or willing, to change, McCracken noted that 396 of the companies in the 1985 Fortune 500 have since been destroyed, merged or acquired.
As with any other IT project, there is a risk involved in migrating to the cloud, but the risks are wider because of the nature and characteristics of the cloud.
As I mentioned in the previous post, it is very important to go out there and get your hands dirty (as much as you can get dirty in the cloud).
Get a feeling about how those systems work and how different they are from your current environment.
Once you have done that, you have an idea of what cloud computing is all about (or you hire someone who does), and you can start creating a cloud-based project.
Here are some steps for creating a successful cloud migration project.
Step one: Identify the business opportunity
The first step is to identify the best business opportunity that the cloud could offer your organisation.
The cloud can offer different kinds of benefits for your company, from operational efficiency through revenue growth and even to a business transformation that would fully change the way your business operates. It’s important to fully understand the business potential of the cloud and the different business scenarios that cloud technology offers in order to identify the right business opportunity for your organisation. Learn more about the cloud benefits here.
To help you identify the potential of cloud computing for your organisation, you should try answering the following questions:
- Are you focusing on your core business or are you wasting too much time and money on operational missions?
- Can you improve your financial performance by increasing your cash flow or reducing your IT risks?
- Do you have a clear IT budget? Can you easily plan your investment in IT operations?
- When producing new products, how much time passes from requirements to market?
- How fast can your IT systems etc. react to change?
- What do your customers say about your service?
- What are the main expenses in your IT budget?
- Are there any services running on your internal systems that you could externalise to your customers?
- What are your competitors or similar businesses doing in the cloud?
Answering those questions should help you identify the business potential of using cloud computing. Without identifying a strong business opportunity, it would not make any sense to start a cloud adoption project.
This process is the key step in creating a successful cloud adoption project, and it requires a deep understanding of your business and IT systems on the one hand and a deep, up-to-date understanding of the cloud’s potential on the other. At this stage you will need to involve both your business and IT professionals and cloud experts. The result of this stage should be a list of functional requests for the cloud service provider (CSP).
The cloud is constantly evolving: more and more services are coming out, and more and more companies are adopting them. You should keep in mind that even if you cannot identify a current opportunity, you should keep looking for one and rerun this process of asking questions. If you do not keep checking the cloud’s pulse, you might find yourself lagging behind your competitors and clients.
Step 2: Understand your current environment and IT needs
Now that you have identified a very good case and decided to start a cloud migration project, you will need to map the relevant assets that you are planning to move into the cloud. You will also need to figure out what is required to run those assets in the cloud.
This requires a deep inspection and mapping of your environment, systems and applications. You would expect any company to have a fully detailed map of their systems and applications. Surprisingly, the reality is that a lot of companies do not have a detailed and updated mapping of their systems. The mapping should include the full hardware and software architecture, including the specific requirements for each system. This stage is important for assessing how many resources you might need to invest in the migration.
Step 3: Explore the relevant service providers
In this step we create a database of all the cloud providers that can deliver the services we are looking for. It would be wise to run an RFI (Request For Information) process. The RFI should be delivered to all the relevant cloud providers with the aim of asking them how their service offering can address the requirements list that we created in Step 1.
We need to make sure that we have well-defined requests and that we can prioritise those requests into different levels, such as mandatory requests, important requests and nice-to-haves (or, as some would like to define it, “an advantage for the offer”). The more specific and accurate those definitions are, the better we can track down the best service provider. We need to remember that a very high requirement level could leave us with only a few candidates, which will lower our bargaining power against the cloud service providers.
Another tool that could be helpful in creating the list of relevant CSPs is the Cloud Taxonomy by OpenCrowd, which contains a list of cloud service providers divided into different categories and services.
Step 4: Risk assessment
At this point we need to start thinking about the risks involved in conducting a cloud project. The methodical way to do so is to conduct a risk assessment of the project.
So this is what we have got up to this point: a great idea about how to create better business using the cloud, a full mapping of our systems and applications, and a list of relevant CSPs that can provide us with the cloud services we desire.
Through the risk assessment we need to conduct the following processes:
- Fully understand the compliance requirements for our assets. Compliance will probably be the major factor affecting the chosen solution; any solution that does not address the compliance demands is not a valid one.
  - Even if it seems that we cannot address the current compliance requests, we should keep in mind that compliance rules will probably be adjusted to the leading technology of the day, which means the cloud.
  - By joining forces with the cloud providers, it is quite likely that you can find a solution that allows you to adopt the cloud and still meet the compliance demands.
- Map the risks for each asset that we consider moving into the cloud.
  - A great tool for this process is the ENISA Cloud Computing Security Risk Assessment document, which contains a detailed analysis of the cloud risks and their probability and impact on organisational assets.
- Create a mitigation plan.
  - For each risk, we need to list all the mitigation options that could be taken to reduce it.
  - It is very important to estimate the costs and, if possible, to produce some alternative mitigations.
  - Finally, and most importantly, present the risks that remain after the execution of the mitigation plan.
- Derive requirements and key factors.
  - Create the list of requirements resulting from the risk mitigation plan; it will be part of the project RFP (Request For Proposals) document.
  - Determine the key factors and specific characteristics of this project.
- Map the project risks.
  - During the risk assessment we also need to map the project risks that are not security related.
This step is important in order to ensure that both the organisation’s management and the project management are familiar with the risks and accept them as part of the risk mitigation plan for the project.
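The risk-mapping part of this step can be sketched as a simple scored register. The assets, risks and 1-5 scales below are invented purely for illustration; the ENISA document defines its own probability and impact levels:

```python
# Each entry: (asset, risk, likelihood 1-5, impact 1-5). Values are made up
# for illustration; a real assessment would use ENISA's scales and analysis.
risk_register = [
    ("customer DB", "data breach at provider", 2, 5),
    ("web frontend", "provider outage", 4, 3),
    ("mail archive", "vendor lock-in", 3, 2),
]

def prioritise(register):
    """Order risks by score = likelihood x impact, highest first."""
    return sorted(register, key=lambda r: r[2] * r[3], reverse=True)

ranked = prioritise(risk_register)
```

Even a crude ordering like this helps decide which risks get a full mitigation plan first and which residual risks management must explicitly accept.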
Step 5: Request for proposal (RFP)
Now the picture is quite clear: we have determined the project goals, and we know the risks, the key players and the service providers. We are now ready to write the RFP document that we will send to all of the relevant service providers that addressed our RFI document, as well as other relevant CSPs.
As part of the RFP document we need to create a metric that will help us evaluate the different proposals. This metric should be based on our risk assessment and on the issues that turned out to be important or even critical when we defined our project requests in step no. 3.
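Such an evaluation metric could be as simple as a weighted score per proposal. The criteria, weights and scores below are invented for illustration; in practice they would come out of your own risk assessment and requirements:

```python
# Criterion weights derived (hypothetically) from the risk assessment.
weights = {"security": 0.4, "price": 0.3, "sla": 0.2, "support": 0.1}

# Each proposal is scored 0-10 per criterion by the evaluation team.
proposals = {
    "provider A": {"security": 8, "price": 5, "sla": 9, "support": 6},
    "provider B": {"security": 6, "price": 9, "sla": 7, "support": 8},
}

def weighted_score(scores, weights):
    """Combine per-criterion scores into a single weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(proposals,
                 key=lambda p: weighted_score(proposals[p], weights),
                 reverse=True)
```

Fixing the weights before the proposals arrive keeps the evaluation honest: the metric, not a sales pitch, decides the ranking.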
This is the most crucial step. The cooperation between the IT professionals, who need to point out the key technical issues, and the lawyers, who need to translate those requirements into a legal document, must be at the highest level. Any demand that does not appear in the RFP will probably not be implemented in the project.
Step 6: Running the project
After all the hard work, we have now chosen the cloud provider that has given us the best offer. It is time to sign a contract and start the project.
As part of the preparations for the project, it is very important to clearly define the borders of responsibility between the CSP and your organisation.
We need to create the success metrics for the project and include auditing points in our contract to ensure that the cloud provider is committed to our project goals.
While working on the project we need to determine how to keep running our business, and the point at which we are going to transfer it to the cloud. We need to make sure that all our data is backed up and will not be lost at any time during the project.
It would be wise to run the system in the cloud in a test mode with a “friendly client” (for example our business client) and make sure that we have got all the functionality that we expected.
Step 7: Evaluate the project according to the success metric
After we have finished the project, we need to start evaluating it.
The first thing we need to do is to make sure that the cloud provider has delivered all the requested content for the project. We need to make sure that all the functionality of our systems has been maintained.
Another reason why the evaluation step is important for ensuring a successful cloud migration is that we have just finished our first project, but it will probably not be our last. In order to improve from one project to the next, we need to gather the relevant insights from the current project and make sure they are implemented in the next one.
The cloud can give some great advantages to our business and organisation, but migrating to the cloud requires an investment of time and money. We need to make sure that it is done in a responsible manner and that both the organisation’s management and the cloud project management understand and accept the risks and the mitigations that can address those risks.
It is strongly advised to start your cloud adoption with a low-risk project. On the other hand, we still want the project to be significant enough to give the organisation a sense of the huge potential that cloud computing holds.
Adopting cloud services can be a complicated matter, but by joining forces between cloud providers and cloud customers, and between the organisation’s management and the IT professionals, we can reduce the risks and create a successful and useful project that offers the organisation a meaningful advantage: increased productivity, reduced costs and improved services for its customers.
Filed under: Cloud Computing | Tags: anvendt kryptografi, arrangement, cloud, foredrag, risks, security, SMC
We are participating at the yearly developer conference GOTO in Aarhus next week. We will be at booth #12, where we will present some of our projects and products, including our new course on Cloud Security (see alexandra.dk/ccsk). In fact, we will run a lowest unique bid auction where all GOTO-participants can bid, and the winner will get a free CCSK-course!
In addition to the booth and the auction, we will give two presentations: one in the regular program and one in the Solutions Track (sponsored talk):
- Monday Oct 10, in the regular program, Jan Neerbek will speak about “Apriori data mining in the cloud”.
- Tuesday Oct 11, in the Solutions Track, I will speak about “Cloud Security or: How I Learned to Stop Worrying and Love the Cloud”.
The latter is one of the four security talks mentioned by Lars Sommer in his blog on Version2. It should be added though that CloudCamp organises an unconference on Tuesday Oct 11, where Janus Dam Nielsen will give a lightning talk on “The Golden Age of Cryptography”. So Lars, if you are in Aarhus anyway, there will be a bit more on security :).
Filed under: Cloud Computing, Privacy | Tags: APT, cloud, Lockheed Martin, risks, RSA, security, sikker kommunikation
Advanced persistent threat (APT) refers to a group, such as a foreign nation, state or government, with both the capability and the intent to persistently and effectively target a specific entity. The latest example of this kind of attack was the attack on RSA’s systems, which was just the first step towards gathering the keys to RSA’s SecurID tokens; these helped the attackers get into the systems of the aerospace company Lockheed Martin and two major US defense contractors: L-3 Communications and Northrop Grumman.
Those attacks came after the public had been exposed to highly sophisticated attacks like the Stuxnet malware, which was claimed to be targeted against Iranian nuclear plants, and the Operation Aurora attacks against Google, Adobe, Juniper and others.
What happened? Where did all those attacks come from? Had World War III started and they just forgot to announce it?
Although nations have been using digital espionage for many years, there used to be a clear border that no one dared to cross. The focus was on military or diplomatic targets while the civil systems were kept out of the game. But in the last couple of years – and maybe a little before then – some APTs have started to exploit the internet to achieve economic and strategic aims. Since then the border has started to get blurred.
The main difference between APTs and common attacks is that APTs focus on specific targets and have well-defined goals, while the creators of a “common” virus do not attack a specific target. Another difference is the almost unlimited resources that APTs have at their disposal to support a cyber-attack, such as computer engineers, money and a range of intelligence-gathering capabilities like signals intelligence, human intelligence and visual intelligence.
Let us take a look at the RSA/Lockheed Martin attack and let us assume that those APTs (The Americans are blaming China) were looking for information about the next American combat aircraft (The F-35 is manufactured by Lockheed Martin). On their way to achieving this objective stands a security company like RSA, and the chances that the RSA security teams and systems could withstand this attack are very low.
RSA is one of the leading security enterprises and has great security systems to protect its network; an indication of that is the fact that RSA could identify the attack and block it afterwards. But unfortunately that was not enough.
Why could a security leader like RSA not face that kind of attack? And if RSA failed, what does this say about the ability of other enterprises to bravely survive attacks from APTs?
The main reason that commercial company networks cannot face those attacks in time is simple: they never considered APTs as one of the rivals in their threat model, or they underestimated the capabilities of the APTs.
This is a very typical error in security: important threats (e.g. unforeseen timing attacks on SSL, side-channel attacks and many others) are left out just because it is complicated to fully figure them out or expensive to deal with them.
If APTs are not in the threat model as one of the potential rivals, there is no chance of dealing with those attacks.
On the other hand, we can estimate that Lockheed Martin succeeded in blocking the attack (based on Lockheed Martin’s reports), and the fact that the Department of Homeland Security and the Defense Department offered to help tells us that Lockheed Martin can get very significant reinforcement.
Another thing is clear when you are responsible for developing the future US attack aircraft: You must include rivals with APT capabilities in your threat model.
As APT attacks become the fate of a growing number of enterprises, the IT industry will need to create a new “defense doctrine” that addresses the new types of threats.
But the IT industry cannot do it all alone. It needs governmental support. The Americans have already established the US Cyber Command, and the British army is right behind them. Soon every western country will need to address this issue and create the proper governmental bodies to head the national cyber policy.
Filed under: Cloud Computing
Intel just launched a new service called Intel AppUp Small Business Service, which is a part of Intel’s Hybrid cloud services.
What is this hybrid cloud that Intel offers? It is a server placed in the customer’s office but managed by a Managed Service Provider (MSP) as a cloud service. Intel leases all the hardware on a three-year term to the MSP, who resells or re-leases it to the end users.
The usage of the server is monitored by the MSP for billing, and the customer is charged on a pay-per-use basis. Software is provided by Intel, the MSP or the end user. If the end user needs new software or a service, the MSP can roll out their own VMs and add those to the mix. The end users can also roll out their own software and just install it. The box is placed at the end user’s premises and can have whatever hardware or software the user wants installed on it.
Most of the things Intel is offering are not new, but Intel supplies a nice package and tags it with a fancy cloud name (although some suggest that it is more private than hybrid and that it is not really a cloud). The result is a great new offering that will help small and medium-sized businesses migrate easily to the cloud.
Who can benefit from this service?
If you have a small business and your system contains sensitive data subject to regulation, this service can help you maintain a high level of security and meet regulatory needs while at the same time gaining cloud benefits like reduced IT costs, focus on your core business, and the agility that is so important for SMBs.
In this post, Derrick Harris from GigaOM argues that regulators should take great care not to stand in the way of business innovations based on “big data”. In his conclusion he writes:
Don’t get me wrong, consumers deserve more information and the federal government is right to attempt to give it to them, but everyone needs to get educated on the connection between data collection and usage and the benefits they provide.
I think he is right — education is needed, but the question is who needs to be educated. To be honest I think Harris is maybe a bit short-sighted and misses a big innovation opportunity, namely building systems that will allow for all big data innovations while preserving the security and privacy of citizens.
This is exactly the goal of projects such as ABC4Trust and other work that we participate in. People working with modern cryptography have a great responsibility for communicating the technological possibilities. What is fiction, what is fact? What is feasible, what is not? What is both fact and feasible is building solutions that allow for a lot of data mining whilst still preserving the privacy of users. Technologies such as Microsoft’s U-Prove and IBM’s IdentityMixer are ready for adoption, and policy makers are looking into it.
I firmly believe that strong privacy regulations will in fact foster rather than hamper innovation.
Amazon Web Services (AWS) had a huge outage, the headlines said. And this outage took down a lot of companies that use Amazon as their cloud provider.
The Amazon outage is a significant incident because cloud computing is becoming a multi-billion-dollar industry, and an event like this has a huge impact on the business called cloud.
So it was no surprise when a big discussion opened up on the blog scene.
“It’s not Amazon’s fault”, some “cloud defenders” called in their posts. “The customers had a really poor architecture that took them out of business for those long hours. You have to pay for having been stupid and not preparing yourself for those kinds of incidents.”
“You cannot blame the customers for your resounding failure”, some “independent public defenders” wrote back, blaming Amazon for not keeping its business liabilities to its paying customers.
“Oh yes we did”, Amazon replied, “for the specific service we did”, and the debate continued.
I’m not sure who the villain in this story is, and I’m not sure it matters, but every incident of such magnitude has to teach us a lesson…
One thing is clear: None of the companies that suffered during the outage will take their business away from the cloud. Maybe some angry customers will switch cloud provider but that’s all. We all have to remember that those companies did an excellent business and most of them could not do it without relying on the cloud.
So what will they do? They will scrutinise themselves and try to figure out the causes that put them out of business for such a long time, and if we’re lucky some of them will write about it and we can all learn from their experience.
But the customers are not the only ones who will scrutinise themselves. Amazon will probably check itself thoroughly and should hopefully come to some useful conclusions on how to make its services better (I’m not sure they will go public with that – they don’t want to encourage a potential lawsuit), but all of their clients will benefit from it, because that’s how it works in the cloud.
And not only the immediate players will perform a general check-up; most major cloud providers will probably do the same. The first question each cloud provider’s CEO asked himself when he heard about this incident was: could it happen to me? And then (I would like to believe) he turned to his staff and asked them to make a full and extensive check-up.
So we should almost thank Amazon for this incident because it will certainly make the cloud a better and safer place than it used to be.
But why do we need such incidents to happen before we initiate what obviously needs to be done? Because people need to see the damage with their own eyes before realising what they already knew. It happened to the aviation industry after 9/11, it happened to the US Army after the WikiLeaks incident, and it will keep happening again and again, because it is easier to respond when the damage has already occurred: you don’t need to convince anybody that the threats are real, and there is no need to do risk management or fight for more budget.
You have to learn from the experiences of others and especially from major incidents, but you don’t have to wait for an incident before you make improvements.
What IT and security people need to do is test their systems, and in a cloud context that means testing the cloud provider’s service.
How do you do it?
- Perform a risk assessment of your system and assets, and find out what the consequence of each risk would be should it materialise.
- Understand the risk and determine the level of risk that is acceptable for each asset.
- Demand transparency from your cloud provider and make sure it appears in your contract.
- Get to know your cloud service provider:
  - The architecture that he uses.
  - The weak sides of the provider’s systems. For example, the weaknesses of EBS were well analysed by Adrian Cockcroft on his blog.
- Make sure that you can perform vulnerability assessments, penetration tests, log audits and activity monitoring of the cloud provider’s system.
- Check that your cloud providers comply with regulations.
- Check the alternatives. Can you find providers that better meet your demands?
And after you have checked all this, you need to perform a stress test of your system:
- Test it in extreme situations (peak scenarios).
- Test it for vulnerabilities and do penetration tests.
- Audit the components in the cloud that influence your system.
- Test the business continuity and disaster recovery plan.
Cloud computing is already changing the face of the IT industry. But in order to make the most of it and stay alive, you’ll need to be alert and ready for all the new challenges that it presents.