Cloud Service Providers: Storage Solutions, SLAs, and Uptime Guarantees

June 1, 2017

Today’s world of data centers, IoT, social media, streaming, and gaming puts a strain on the storage market. Cloud service providers have to keep adding different types of storage technologies to keep up with demand. Enterprises are learning to structure their data to save room on their servers, but most of the data received from consumers is unstructured. Unstructured data takes up more space, so it is up to providers and individual companies to find new ways to accept the data, structure it, and store it.

Many businesses are discovering the benefits of a pay-as-you-go approach to IT expenses. Working with a managed service provider is a cost-effective solution that can add efficiency and more. But it is important to find out how your SLA (service level agreement) is laid out. You need to know how your data is stored, and whether uptime is affected by the amount of data being stored.

Unstructured Data

For unstructured data to be stored efficiently, a few key elements should be met:

  • Durability – Depending on the type of data it may legally, or through industry requirements, have to be stored forever. Data needs a way to be backed up on multiple servers to guarantee no data loss for the life of the company, and maybe beyond.
  • Availability – With the Internet of Things in full swing, clients need access to their data 24/7. Data accessibility is critical, and sometimes the standard five nines (99.999%) is not good enough. Check your SLA and make sure the cloud service provider has a sturdy alternative for accessing your data. Many providers run duplicate servers at the same time with a failover switch: if one crashes, the other automatically kicks in. This may cost more, but depending on how urgently you need access to your data, it may be well worth it.
  • Cost and Management – These are both always on the wish list. Low cost is critical for providers to stay competitive, and clients want easy access to their file structure at a moment’s notice. Virtual servers and new administrative monitoring solutions help with both of these problems. When you talk to your provider, make sure they have gone over the options with you and are willing to put them in writing. Remember, the SLA is there for your protection just as much as it is for the provider’s.
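The payoff of those duplicate servers can be quantified with basic probability. The sketch below (Python, with illustrative availability figures rather than any provider's actual numbers) shows how replicas with automatic failover multiply their downtime probabilities together:

```python
def parallel_availability(single: float, replicas: int) -> float:
    """Availability of a system that stays up as long as at least one
    of `replicas` independent copies is up: 1 - P(all copies down)."""
    return 1 - (1 - single) ** replicas

# One server at "three nines" (99.9%) is down roughly 8.8 hours a year.
one = parallel_availability(0.999, 1)
# Two such servers behind a failover switch reach roughly "six nines".
two = parallel_availability(0.999, 2)
print(f"{one:.3%} -> {two:.6%}")
```

The math assumes the replicas fail independently; correlated failures (shared power, shared network) reduce the benefit, which is why providers spread duplicates across data centers.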


For the most part, there are three main types of storage: object, file, and block. Object storage is similar to what we see every time we open Netflix or another streaming service, although it is not limited to that use case. It is particularly useful when several users are accessing the same data at the same time. These large pools of data are the most scalable and can easily grow or shrink as needed. File storage is what we see every time we get on our computers. It is a simple but well-controlled hierarchical structure.

Cloud providers and other large companies use data centers to manage large amounts of file structuring across networks. Block storage is the most tightly structured of the storage family and is usually implemented with databases. It divides data into equal-sized blocks, each holding a limited amount of data. This makes data easy to maintain and separate, and helps compress data as needed.
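To make the object-versus-block distinction concrete, here is a toy Python sketch. The dictionary-as-object-store and the tiny 4-byte block size are purely illustrative; real systems use much larger blocks (commonly 512 bytes to 4 KB) and distributed backends:

```python
BLOCK_SIZE = 4  # bytes per block; deliberately tiny for illustration

def to_blocks(data: bytes) -> list[bytes]:
    """Block storage view: split data into equal-sized blocks,
    zero-padding the final block so every block is the same size."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    if blocks and len(blocks[-1]) < BLOCK_SIZE:
        blocks[-1] = blocks[-1].ljust(BLOCK_SIZE, b"\x00")
    return blocks

# Object storage view: the whole blob lives under one key, fetched whole.
object_store = {"video/intro.mp4": b"...entire file stored as one object..."}

# Block storage view: a database row becomes uniform, addressable chunks.
blocks = to_blocks(b"customer-db-row")
```

The key design difference the sketch highlights: object storage addresses whole blobs by key (great for many readers of the same large file), while block storage exposes uniform chunks that a database can update and manage individually.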

Don’t forget to download the white paper: Managed IT Services For Small Businesses. It provides additional information on this topic.

Software-Defined Storage (SDS)

SDS is a type of self-monitoring storage system that works with system failures instead of trying to avoid them. When a hard drive stores information, it places the data in individual sectors, and these sectors can become corrupt. A standard maintenance tool can find a bad sector and close it down, meaning the hard drive will no longer use that sector and will store data elsewhere. The problem with this method is that the data already stored in the sector may have been corrupted. So, even though the computer now places data in other locations, that does nothing for data already corrupted or lost in the failure. SDS does not try to find ways around failure. It knows a failure will occur and is designed to work around it from the get-go.

A key feature of SDS is how it reroutes data. Data is stored on multiple servers across multiple data centers, and because the data is constantly being moved, it is constantly being replicated. The advantage is that if a sector on a hard drive goes bad, SDS will look for the data and, when it can’t find it, pull it from its last good location. This prevents data loss and keeps a form of backup in existence at all times.
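That rerouting behavior can be sketched as a toy replicated key-value store. Everything here (the node names, three-way replication, in-memory dicts standing in for disks) is an illustrative assumption, not a real SDS implementation:

```python
import random

class ReplicatedStore:
    """Toy sketch: each key is written to several nodes; a read falls
    through to a surviving replica when one node's copy is gone."""

    def __init__(self, nodes: list[str], replicas: int = 3):
        self.nodes = {name: {} for name in nodes}  # node -> its local data
        self.replicas = replicas

    def put(self, key: str, value: bytes) -> None:
        # Write the same value to `replicas` randomly chosen nodes.
        for name in random.sample(list(self.nodes), self.replicas):
            self.nodes[name][key] = value

    def fail_node(self, name: str) -> None:
        # Simulate a crashed drive: that node's copies vanish.
        self.nodes[name].clear()

    def get(self, key: str) -> bytes:
        # Scan nodes and return the first surviving copy.
        for store in self.nodes.values():
            if key in store:
                return store[key]
        raise KeyError(key)
```

With three replicas spread over four nodes, losing any single node still leaves at least two copies, so reads keep succeeding; that is the "backup in existence at all times" described above.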

SDS is a self-monitoring system and is constantly running audits and other maintenance tools on the entire system. This helps the system know when a node is not operating at full efficiency, and it will reroute data clusters to other nodes. This creates resilience in the storage system and helps other monitoring tools determine if the hardware is going bad.

An outside controller is used to manage, audit, and optimize the storage system. The controller moves bits of data into smaller sectors to optimize the amount of space used on the servers’ drives. It accepts every storage request and decides the best place in the network to store that data.

The protection of data is always a key consideration. Buy the book ‘Easy Prey: How to Protect Your Business From Data Breach, Cybercrime and Employee Fraud’.

SLAs and SLOs

The SLA, or service level agreement, is the contract you sign with your cloud service provider. It defines what is expected of the provider, and what may or may not be the responsibility of the provider versus the client. All the promises the cloud provider made during your consultation should be in the SLA; this is the binding contract. Enterprise-level companies may have software monitoring tools in place to verify the bandwidth and latency expectations outlined in their SLA, so the company can hold the provider responsible if it is not receiving the promised benefits of the contract.

An SLO, or service-level objective, is essentially a set of metrics that define the SLA. In other words, the SLO is a testing mechanism to verify the cloud service provider is meeting its promised criteria at all times.


According to ACMQueue, “The availability component of an enterprise SLA can be technically challenging. For example, a business-critical application might not tolerate more than five minutes of downtime per year, conforming to an availability SLO of 99.999 percent (“5 nines”) uptime.”
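The arithmetic behind that quote is straightforward: an availability target translates directly into a downtime budget. A quick check in Python (assuming a 365-day year):

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Maximum downtime per year allowed by an availability target."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability) * minutes_per_year

budget = downtime_minutes_per_year(0.99999)
print(f"{budget:.2f} minutes/year")  # about 5.26 minutes at five nines
```

The same function shows how steep each extra nine is: four nines allows about 53 minutes a year, five nines barely five.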

Across the board this is usually not a problem, but enterprise-sized companies can run large applications, or business-specific networking software, that does not promise the same high standard of uptime. This can create a problem: the SLO may show uptime percentages outside acceptable parameters even though, under the SLA, the provider is technically not at fault. Cloud service providers have been working with development companies on better fault-detection systems for providers to manage. This type of software monitoring tool lets the provider help the client by detecting possible problems in the client’s network before they occur.

There are a lot of options in data storage, and in how SLAs are put together. If you feel an SLO is right for your company, you can purchase one, but it is another out-of-pocket expense. Just make sure the SLA covers every aspect of your network you are concerned about. Don’t be afraid to ask questions, and have some of your in-house professionals look over the contract to make sure everything is covered as it should be. Get a free assessment and see how TOSS can help you with all your storage needs.


Let's Start a Conversation.

Connect with us and experience the TOSS difference.
