
Cloud Fuels Disruption in Security Market

  • Cloud
  • Security


Cloud computing is having a major impact on all other areas of IT and delivering generally profound changes to the business world. Here’s a look at how the security field is evolving to embrace the cloud in 2016.

  • Malware Protection
  • Use of Firewalls
  • Load Balancing
  • Encrypting
  • Switching
  • App-Based Storage
  • Conclusion

The security industry is rapidly changing, with firewall and switching companies fading away to make room for solutions more directly geared toward the cloud. On the other hand, there are certain types of security firms that will continue to grow as the landscape shifts increasingly from physical to virtual.

Malware Protection

Anti-malware companies have expertise related to security, but their focus has traditionally been on in-house systems. Now that the cloud is becoming so central to computing, malicious parties are turning to those systems as points of entry for attack. In 2016, anti-malware firms will further invest in the development and introduction of cloud-specific tools.

The services that will be used are fundamentally similar since the basic idea is to check traffic for possible malware injections. One challenging aspect is interoperability, notes TechCrunch – “how the anti-malware solution gets inserted into a cloud system to which it doesn’t necessarily have access.”

This year, cloud infrastructure-as-a-service providers will be allowing people to use more anti-malware options with their systems.
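
To make the idea concrete, here is a purely illustrative Python sketch of the kind of signature check an anti-malware service might run against traffic it inspects. The signatures and sample payload are hypothetical placeholders, not a real detection engine.

    # Toy signature scan: flag payloads containing known-bad byte patterns.
    # The signatures below are invented examples for illustration only.
    KNOWN_BAD_SIGNATURES = [b"<script>evil()", b"eval(base64_decode("]

    def looks_malicious(payload: bytes) -> bool:
        """Return True if the payload contains any known-bad pattern."""
        return any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

    sample = b"GET /index.php?q=eval(base64_decode('...')) HTTP/1.1"
    print(looks_malicious(sample))  # True for this crafted example

Real anti-malware tooling layers far more on top of this (heuristics, behavioral analysis, sandboxing), but the interoperability question TechCrunch raises is about where code like this gets to run, not what it does.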

RELATED: As companies explore public cloud, they are realizing it’s important to look beyond brand recognition and price to the actual technologies and design principles that are used. In other words, what defines a strong IaaS service? At Superb Internet, we offer distributed rather than centralized storage (for no single point of failure) and InfiniBand rather than 10 GigE (for dozens of times lower latency).

Use of Firewalls

The unfortunate news for firewall providers is that their market is taking a huge hit with the emergence of cloud since access control is now being handled externally.

Firewalls determine the extent to which communication between certain systems is allowed and which protocols are acceptable. These systems have typically been IP-focused. Services such as packet monitoring and app awareness will still be needed, but access control is handled as part of the cloud service.
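
For readers who have never written firewall rules directly, here is a rough Python sketch of what IP-focused access control boils down to: a list of trusted networks and permitted ports. The addresses and ports are made-up examples, not a recommended policy.

    # Minimal IP-focused access control check, in the spirit of a firewall rule set.
    from ipaddress import ip_address, ip_network

    ALLOWED_SOURCES = [ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24")]
    ALLOWED_PORTS = {22, 443}

    def permit(source_ip: str, dest_port: int) -> bool:
        """Allow traffic only from trusted networks to approved ports."""
        src = ip_address(source_ip)
        return any(src in net for net in ALLOWED_SOURCES) and dest_port in ALLOWED_PORTS

    print(permit("192.168.1.20", 443))  # True: trusted network, approved port
    print(permit("203.0.113.5", 443))   # False: source outside the allowed networks

In a cloud environment, rules of this kind are typically configured through the provider's own access-control features rather than a customer-run firewall appliance, which is exactly the shift squeezing firewall vendors.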

Load Balancing

Load balancing is a critical part of computing, but the companies specializing in this area will also become less prominent in 2016. Load balancing spreads workloads evenly across machines, a characteristic that is built into the cloud model and seen as one of its primary strengths.
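
As a quick illustration of the core idea, here is a bare-bones round-robin dispatcher in Python. The backend names are placeholders; a production balancer would also track health, capacity, and session affinity.

    # Round-robin load balancing: hand each request to the next backend in rotation.
    from itertools import cycle

    class RoundRobinBalancer:
        def __init__(self, backends):
            self._pool = cycle(backends)

        def pick(self):
            """Return the next backend in rotation."""
            return next(self._pool)

    balancer = RoundRobinBalancer(["app-server-1", "app-server-2", "app-server-3"])
    for _ in range(6):
        print(balancer.pick())  # cycles through servers 1, 2, 3, 1, 2, 3

Cloud platforms bake this behavior into their own services, which is why standalone load-balancing vendors are losing ground.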

Load balancing will still make sense with some legacy systems.

Encrypting

With traditional systems, encryption was an afterthought for many scenarios. In 2016, as the cloud blossoms, so will encryption – which now has a more pivotal role. However, adaptation to cloud is key.

“Traditional agent-based encryption is … hard to deploy because it doesn’t work seamlessly with data management and other infrastructure functions,” notes TechCrunch. “[E]ncryption vendors need to develop solutions that are massively scalable and truly transparent.”

Encryption will be built into many cloud systems in 2016. Independent encryption tools will also become more prevalent. Encryption could eventually become a more comprehensive strategy to safeguard networks via access control, alongside its role in shielding the data.
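
As a rough sketch of what transparent, client-side encryption can look like, the Python below encrypts a record before it would ever be uploaded, assuming the third-party cryptography package (pip install cryptography). The upload call is a hypothetical placeholder for whatever SDK your provider offers.

    # Encrypt data before it leaves your environment; store only ciphertext in the cloud.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, keep this in a key-management service
    cipher = Fernet(key)

    record = b"customer-id=1234; balance=99.50"
    ciphertext = cipher.encrypt(record)  # this is what actually gets uploaded
    # upload_to_cloud(ciphertext)        # hypothetical provider SDK call

    assert cipher.decrypt(ciphertext) == record  # round-trips back to the original

The hard part, as the TechCrunch quote suggests, is making this invisible to users and workable at massive scale.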

Encryption could gradually become the new “ground zero” for security.

Switching

Switching solutions are sophisticated tools, with capabilities such as establishment of a virtual local area network (VLAN). Typically switching systems designate which servers within a data center can and can’t interact. Within a network management context, switching becomes a much more elaborate undertaking.

With cloud, you no longer have to worry about network management in that sense. You can establish parameters through which switching occurs automatically. Network access control becomes a non-issue.
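
To picture what "parameters through which switching occurs automatically" might mean in practice, here is a toy Python policy table that assigns a VLAN to a new virtual machine based on its workload tier. The tier names and VLAN numbers are hypothetical, not drawn from any particular platform.

    # Hypothetical switching policy: workload tier determines VLAN membership.
    VLAN_POLICY = {"web": 10, "app": 20, "database": 30}

    def assign_vlan(workload_tier: str) -> int:
        """Return the VLAN a new virtual machine should join, based on its tier."""
        if workload_tier not in VLAN_POLICY:
            raise ValueError(f"No VLAN policy defined for tier '{workload_tier}'")
        return VLAN_POLICY[workload_tier]

    print(assign_vlan("web"))       # 10
    print(assign_vlan("database"))  # 30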

You do still want switches so that a single network can be supported by more than one infrastructure, but that is not a huge business. The business of switching will therefore be in decline in 2016.

The problems of switching companies are amplified by the challenge of cloud integration. “To get a so-called virtual switch inserted in a cloud-based data center, it would need to be tightly integrated with a cloud-based hypervisor,” says TechCrunch. “[There is] no incentive for cloud providers to give third-party switch vendors special access to their systems.”

App-Based Storage

Data is expanding astronomically, and the cloud gives enterprises someplace to immediately store all that extra information. That gives rise, in turn, to storage through applications.

The companies that will be the most successful with these cloud storage solutions are ones that will allow organizations to manage both public and private clouds.

The storage systems that will succeed the most will be ones that have encryption as a central component. Otherwise it will be necessary to encrypt through additional means, and that’s inconvenient.

Conclusion

There has been a lot of hype for the cloud in the last few years, but 2016 will be a year of massive change. As TechCrunch notes, “The transition of the enterprise from private to public clouds is likely to be the most impactful transition in the IT data center sector in the past three decades.”

Colocation? Or Keep Your Physical Machines Internal?

  • Colocation
  • Data Centers


Of course many businesses are moving to cloud hosting. However, there is another way that businesses are still using third parties for physical systems: colocation.

  • Considerations for Data Center Cost-Cutting
  • Should You Use Colocation?
  • Controlling Your IT Budget Externally

When you put together a spreadsheet with your data center budget, you really want to be skeptical of every component. You can often nix certain aspects to trim costs, and colocation will often make sense.

Considerations for Data Center Cost-Cutting

Cooling is a huge expense in data centers, and it is easier to control than ever before. You can probably raise the temperature a bit. You can also benefit long-term from free-air cooling or an adiabatic system. You will have to pay upfront, but your power bill will drop, and often upkeep will be reduced too.

Is your data center getting bigger? Continued data center growth is questionable as cloud computing is more heavily adopted as a primary system. The idea of a single app on a single physical server is no longer reasonable. The old model was incredibly inefficient, with hardware grossly underutilized, often at just 10% of server capacity and 30% of storage capacity.

Cloud technology works even when you stay physical, explains Clive Longbottom of TechTarget. “Moving to a flexible, elastic multi-workload private cloud platform should drive server utilization rates up to a minimum of 60%,” he says.

Basically, there is no reason to physically expand when you can make much better use of your current machines through virtualization.
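
A quick back-of-the-envelope calculation shows why. Assuming an arbitrary workload of 120 units and servers that can each handle 10 units when fully used, here is the difference 60% utilization makes:

    # Worked example: server count needed at 10% versus 60% utilization.
    import math

    workload_units = 120          # arbitrary measure of total demand
    capacity_per_server = 10      # units one fully-utilized server could handle

    def servers_needed(utilization: float) -> int:
        usable_per_server = capacity_per_server * utilization
        return math.ceil(workload_units / usable_per_server)

    print(servers_needed(0.10))   # 120 servers at 10% utilization
    print(servers_needed(0.60))   # 20 servers at 60% utilization

The exact figures are invented, but the ratio is the point: six times fewer machines for the same work.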

Your data center doesn’t need to be bigger. Does it need to exist?

Should You Use Colocation?

A data center is incredibly costly. First, it’s a piece of real estate. It was not just expensive to build but is expensive to maintain, since computing is so fundamental to business and requires built-in redundancies. People with technical backgrounds must be trained to manage the infrastructure. Plus, it isn’t an agile choice. Unlike cloud, a data center cannot be immediately dissolved when transitioning to a more virtualized system.

Consider colocation, argues Longbottom. “You retain full ownership and responsibility for the IT equipment, but the facility owner takes responsibility for all that grunt work,” he says. “[T]hey look after power to your equipment, ensure that there is enough cooling and maintain connectivity to the outside world.”

It’s also easier to handle physical security, simply because there are so many customers among whom the costs of monitoring can be distributed.

While your own data center has set parameters, you should be able to easily adjust your space needs in a colocation environment. Now, that’s partially dependent on your provider. You want a hosting company that will make sure they are prepared for your potential growth.

It’s not necessary for your company to have expertise in data center management. In fact, you don’t even need to build skills in maintaining infrastructure hardware. With a “cloud and colocation” perspective, the focus shifts to completing tasks well, integrating the various elements, and delivering the strongest possible user experience. That’s the sea change of the third platform.

Controlling Your IT Budget Externally

It’s only the first step to decide if you want to try colocation. You want to be gearing yourself toward a diversified, loosely coupled infrastructure that takes advantage of the ecosystems created by third parties.

Public cloud is becoming a more central part of new development, notes Longbottom.

Related: The key concern with public cloud is that you use a provider delivering the technology “as it was meant to be.” At Superb, we use distributed rather than centralized storage (the latter wrongly used by many cloud providers), so that even multiple node failures don’t affect performance or data. We also leverage InfiniBand for networking instead of 10 Gigabit Ethernet (10 GigE), because the former has dozens of times lower latency than the theoretical minimum of the latter. Get a Superb cloud VM.

You also want to explore software-as-a-service options. In those cases, you are allowing the outside tech company to handle the entire platform.

Along with using SaaS to handle some basic day-to-day tasks, you can manage the security of your virtual and physical systems through cloud security-as-a-service companies.

You can also use managed platforms for development and testing of your new applications.

Do you want to run your own programs rather than use SaaS? It may still make sense to get a cloud server to run them.

Your budget for 2016 should be geared toward eventually removing the data center entirely. You want to be spending in ways that will allow an easier transition completely away from your own facility and toward environments that embrace agility as a core value, Longbottom remarks. “Invest in IT tools that enable greater workload portability,” he says, “in systems that enable ‘what if?’ scenario planning around end-to-end performance management in real time.”

Major 2016 Concerns of IT Chiefs

  • General


Chief information officers are being pulled in numerous directions at once. What are the most critical areas that should not be neglected in 2016?

  1. Keep increasing digital sophistication.
  2. Leverage cloud to enhance flexibility.
  3. Recruit for changing IT dynamics.
  4. Protect your own position.
  5. Reduce grunt-work time.

In the coming year, as always, CIOs have to figure out how to best use their time to help their businesses grow. It’s easy to get involved in challenges that are complex but don’t have a lot of value. To establish focus in the new year, here are five of the most important concerns for CIOs in 2016.

#1 – Keep increasing digital sophistication.

CIOs have to keep pace with technological developments because the digital aspect of business is becoming a more powerful revenue stream all the time, with Gartner predicting that revenue generated through digital channels will rise from 16% to 37% by 2020.

The Gartner researchers note that a side-effect of the expansion of digital business is that CIOs now must meet the needs of more diverse populations, such as the company’s leadership, outside partners, and the organization’s clients.

The shift to cloud and diversification of infrastructure are ever-present elements of IT now and in the future, comments GLH CTO Chris Hewertson. “The technology’s evolved since we’ve been doing our transformation,” he adds. “It’s not like we’ve got 10 systems – we’ve got hundreds.”

#2 – Leverage cloud to enhance flexibility.

The old model of information technology had parameters that were more firmly established. Hardware, data centers, and resources were more strictly delineated. Now, adaptation is easily accessible through the cloud.

While the third-party virtual computing offered through cloud services has become much more common in recent years, many think that the industry is fast approaching a period of exponential growth.

That’s backed up by the numbers. While legacy systems accounted for 81% of IT in 2014, they will only handle 65% of workloads by 2020, according to a Barclays poll of IT chiefs. Most of that computing is headed to the public cloud, which is forecast to account for 17% of enterprise needs in 2020, up from 5% in 2014.

The conversation about cloud has not yet reached the point of maturity, explains former ABN Amro CIO Geert Ensing. “On-demand technology will be adopted more and more,” he says. “Once you reach scale, external service provision can provide big benefits to the business.”

One key piece of that conversation is the quality of different cloud providers. For instance, a public cloud hosting service that deserves the business of enterprise should set itself apart by providing distributed rather than centralized storage, InfiniBand rather than 10 Gigabit Ethernet, and no overselling for guaranteed performance.

#3 – Recruit for changing IT dynamics.

Contracting with third parties is helpful for many businesses, but there are also times when it’s important to hire people to develop apps and perform other tasks. In fact almost a quarter of IT chiefs (22%) say that not having direct access to people with certain competencies is an obstacle to their progress.

The main areas in which CIOs feel their departments are deficient are analytics, big data, and IT administration. This is a continuing problem: those are the same categories cited in a 2011 Gartner report.

People who really excel at these areas are highly sought-after, which in turn drives them to switch workplaces frequently. The attrition rate in the technology sector averages 20%.

It’s critical to value these employees, notes Adam Thilthorpe of BCS. “You’d better start believing people are your best assets in the digital age,” he says. “[T]hey are fundamental to your success.”

#4 – Protect your own position.

IT leaders should also think about how they plan to adapt to the changing landscape. The roles of many CIOs are shifting rapidly as they start to take charge of new areas such as operations.

In a broad sense, there are three ways that tech chiefs are struggling to meet the demands of their role:

  1. Sway among the company’s top executives
  2. Hiring and retention of engaged, productive staff
  3. Ability to guide the company with a mind toward innovation.

#5 – Reduce grunt-work time.

There is a drive for the heads of IT departments to shift their focus directly toward development and to spend time brainstorming new apps, especially given the growth of cloud computing and the removal of related in-house infrastructural concerns. Unfortunately, more IT staff are employed than ever before.

Plus, there is a big difference in perspective toward the priorities of IT moving forward. IT leaders believe that the top concerns should be digital, analytics, and internal knowledge, says Hewertson. “At the same time, c-suite executives are expected to think beyond the present and to evaluate a broad range of leading edge technologies, such as robotics, artificial intelligence and the Internet of Things.”

Now, the CIOs are right that analytics are critical to the organization’s success, although the Internet of Things is itself part of that effort. Gartner argues that IT departments should focus on algorithmic business: creating platforms and integrating IoT strategies around sensors and wearables will become increasingly important.

Cloud as a Standard-Bearer of Service-Oriented Architecture

  • Cloud


Cloud can be thought of as today’s version of the old tech notion of service-oriented architecture. Let’s look at SOA, its benefits, and how cloud fits into the picture.

  • What is Service-Oriented Architecture?
  • Thinking in Terms of Services
  • Benefits of Services
  • How is Cloud a Further Evolution of SOA?
  • How Cloud SOA is Different

What is Service-Oriented Architecture?

Service-oriented architecture (SOA) is an approach to building IT systems that treats the service as the fundamental unit of design. Here are three basic parameters of an IT service (a short code sketch follows the list):

  1. It is logically based on a task that occurs over and over again, with a standardized outcome (such as delivering geolocation data or organizing financial documents).
  2. It exists as a single entity.
  3. It could also include additional services.
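
To make those three parameters concrete, here is a minimal Python sketch: a repeatable task with a standardized outcome, packaged as a single entity, with one service including another. The class names, methods, and coordinates are illustrative, not a real framework or API.

    # Parameter 1: a repeatable task with a standardized outcome.
    class GeolocationService:
        """Turn a street address into coordinates."""
        def locate(self, address: str) -> tuple:
            # A real implementation would call a mapping provider here.
            return (21.3069, -157.8583)

    # Parameter 2: a single entity, which (parameter 3) includes another service.
    class ShipmentTrackingService:
        def __init__(self, geolocator: GeolocationService):
            self._geolocator = geolocator

        def current_position(self, shipment_address: str) -> tuple:
            return self._geolocator.locate(shipment_address)

    tracker = ShipmentTrackingService(GeolocationService())
    print(tracker.current_position("737 Bishop St, Honolulu"))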

Thinking in Terms of Services

Service is not an idea that originally comes from IT, of course. Instead, it is a straight business term. Look in any business directory and you will immediately notice how many different types of services are being offered.

For any type of service available on the market, the company, or provider, “is offering to do something … that will benefit other people or companies (the service consumers),” notes the Open Group. “The providers offer to contract with the consumers to do these things, so that the consumers know in advance what they will get for their money.”

This notion of a service was adopted by computer scientists to describe the tasks conducted by applications. A software service is just like any other service in that it meets the needs of its consumers and is backed by a provider. There is a stated or unstated understanding between provider and consumer that the software will consistently and reliably generate accurate results.

Benefits of Services

Here are some of the advantages of utilizing services:

  • Big, complex, stand-alone software is an information silo. It is shut off from external parties. In contrast, by using a system of coordinated services, there is better data exchange from one company to another. It’s also more affordable because integration of big programs becomes unnecessary.
  • Building your applications based on services simplifies the process of presenting it to the world. “This leads to increased visibility that can have business value,” says the Open Group, “as, for example, when a logistics company makes the tracking of shipments visible to its customers, increasing customer satisfaction.”
  • What an organization is able to do on a daily basis frequently relies on applications, and it isn’t easy to adjust huge software programs. That means it is challenging to make sure the operations of the business are adapting appropriately to meet the most recent regulations or to get the best chances for growth. Building the system based off of services allows the organization to be much more agile and much more consistently compliant with the law. The business can profit from this decision.

How is Cloud a Further Evolution of SOA?

It makes sense to set everything up as services, says David Linthicum of InfoWorld – and you can see why from the above benefits. Cloud computing is already set up as a form of service-oriented architecture, so essentially you are probably already experiencing the strengths of SOA even if you didn’t know it. Furthermore, if you’ve created apps with cloud components incorporated, you’re participating in an SOA model.

Related: It’s great that cloud has the positive features of service-oriented architecture. However, cloud technology is not all made alike. When you build your applications, you want cloud hosting that offers distributed rather than central storage (so there’s no single point of failure, among other things) and InfiniBand rather than 10 GigE, which offers tens of times lower latency. See our never-oversold cloud plans.

How Cloud SOA is Different

In what ways is the SOA of cloud different from the earlier model of a decade ago?

  1. Coupling

SOA is essentially a loose coupling of systems connected via a service, notes Linthicum. In the past, “Old SOA typically exposed services from one or two systems,” he says, “and the degree of coupling could be tight or loose, depending on the application.”

The cloud-focused SOA software that is prominent now must be loosely coupled in order to function, since thousands of services are often involved in a single program. With tight coupling, the failure of any one of those services could bring the whole app to a halt.
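
Here is a toy Python example of what loose coupling buys you: the application degrades gracefully when one of its many services fails, instead of stopping outright. The recommendation service and its fallback are hypothetical.

    # Loose coupling in miniature: one failed service doesn't halt the application.
    def fetch_recommendations(user_id: str) -> list:
        raise TimeoutError("recommendation service unreachable")  # simulate an outage

    def render_home_page(user_id: str) -> str:
        try:
            items = fetch_recommendations(user_id)
        except Exception:
            items = ["bestsellers"]        # fallback keeps the page working
        return "home page for " + user_id + ": " + ", ".join(items)

    print(render_home_page("u-42"))        # still renders despite the failed service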

  2. Governance

The traditional version of SOA required governance, but a full service-governance environment wasn’t necessary until you hit roughly 500 services; below that threshold, dedicated monitoring rarely made sense.

The cloud SOA programs require governance immediately, basically at 1 service rather than 500, argues Linthicum. “You cannot manage a set of applications that use remote and local cloud services without a sound service governance approach and technology in place,” he says. “That’s because cloud-based services are widely distributed, … so you must have much tighter checks.”
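
As a purely illustrative sketch of what even minimal governance involves, the Python below records which services are invoked, how long each call takes, and whether it succeeds. Real governance tooling does far more (policy enforcement, versioning, security), but the visibility principle applies from the very first service.

    # Minimal service governance: log every service call's duration and outcome.
    import time

    class ServiceGovernor:
        def __init__(self):
            self.call_log = []    # (service name, seconds taken, outcome)

        def invoke(self, service_name, func, *args):
            start = time.time()
            try:
                result = func(*args)
                self.call_log.append((service_name, time.time() - start, "ok"))
                return result
            except Exception:
                self.call_log.append((service_name, time.time() - start, "failed"))
                raise

    governor = ServiceGovernor()
    print(governor.invoke("tax-service", lambda amount: round(amount * 0.04, 2), 100.0))
    print(governor.call_log)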

Interoperability: Key to Fast Cloud Growth

  • Cloud


  • From Legacy to the Virtual Expanse
  • Speed as an Ultimate Priority
  • 2015 & Beyond

From Legacy to the Virtual Expanse

Earlier in the history of the Internet, companies created massive customized environments to store and access customer data, marketing copy, bookkeeping, best practices, and more. When these were developed, interoperability was not a concern.

Typically the software adopted by companies was about making life easier for the IT department (in the sense that they had control of those variables). User-friendliness wasn’t always prioritized, which led to people getting frustrated with IT.

That whole dynamic started to get upended about 10 years ago. That was when Web 2.0 apps started hitting browsers, followed by an explosion of mobile apps. These fundamentally user-focused applications were exactly what people wanted in business.

A short time following the rise of the app, cloud started making it possible for employees to access software immediately, which is how Shadow IT started to develop, explains Ron Miller of TechCrunch. “Instead of going to IT and asking for resources, a process that could take weeks,” he says, “users could sign onto a cloud service and provision software, servers, storage and even developer tools with a credit card.”

As these cloud systems started to mature, it became more and more clear that interoperability was needed, so developers specifically geared their work toward that end.

Related: Do you need ultra-fast, performance-guaranteed cloud hosting? At Superb Internet, we use distributed storage rather than centralized storage, which means even multiple node failures don’t hurt performance. Plus, we use InfiniBand rather than 10 Gigabit Ethernet, which means the latency is dozens of times lower. Learn more.

Speed as an Ultimate Priority

In addition to the shifts toward mobile and cloud seen above, organizations also started to understand how critically important speed and agility were becoming to business. The old systems weren’t always fast enough and couldn’t be adapted as quickly as users wanted.

Cloud and mobile were able to meet that need better than anything else. They allowed developers to introduce complex capabilities within applications more quickly than was possible before.

It was no longer considered the smartest move to build excessively complex and rigid systems. Businesses started to realize they had to coordinate with one another in order to each be able to move forward.

Simultaneously, smart phones and tablets were becoming a lot more popular, notes Miller. “That meant that companies needed to provide apps for their employees to work beyond their cubicles wherever they happened to be,” he says. “Customers were also demanding better tools and easier ways to interact with a vendor.”

Customers were starting to make purchases based off of compatibility, so software and system designers had to become compatible if they wanted to remain relevant. It wasn’t considered appropriate anymore to create rigid environments that were completely partitioned from outside parties. Organizations had to learn to collaborate.

2015 & Beyond

In 2015, cloud continued to become a trusted standard, with General Electric even announcing that it was building nine out of every 10 new applications in the public cloud. More and more thought leaders believe that cloud is fast becoming the way computing will be done and that corporations will gradually get rid of their data centers.

One of those people is Fredric Paul of Network World, who comments that one of the biggest signs of the new dominance of cloud came in May, when the large gaming enterprise Zynga abandoned its plans to build a data center and moved to the public cloud instead. “Basically, the company ripped and replaced a $100 million data center investment in favor of the cloud,” says Paul. “That’s pretty darn dramatic.”

A couple months later in July, IDC released data showing that one out of every three IT infrastructure dollars went toward cloud. The research company also noted that it expected cloud spending to rise, accounting for half of infrastructure budgets by 2019.

Last year, HP did leave the public cloud market, but that was probably because it was having difficulty competing. In November 2015, UK financial companies were assured by the Financial Conduct Authority that public cloud was a sound choice for finance.

Market forecasts are definitely positive. Even so, it’s possible that projections are grossly underestimating what’s going to happen, Paul argues. “It won’t be long—maybe not in 2016, but soon—before any computing project that doesn’t happen in the cloud will have to … [justify] what will be seen as an old-fashioned and inefficient approach.”

What is essentially happening is that companies are realizing computer systems fall into the same general category as electrical power. Do you really want to have to deal with your own power generation? Maybe the infrastructure should be external as well. More and more businesses are deciding that’s the direction they will take, and the speed with which that transition will occur could be blistering.

Throughout this process, interoperability will remain fundamental.

Top 5 Dedicated Server Mistakes to Avoid

  • Hardware


Dedicated servers can be costly. How can you get the most value out of one? The first step is to avoid these common pitfalls.

  • What is a Dedicated Server?
  • Getting the Most Value
  • Error #1 – Poor Cost Management
  • Error #2 – Lack of Attention to Authorizations
  • Error #3 – General Security Neglect
  • Error #4 – Failure to Test
  • Error #5 – Excessive Concern with the Server Itself
  • Conclusion

Cloud computing has been growing incredibly over the last few years, but many companies still choose to adopt dedicated servers (whether run in-house or at a hosting service’s data center) instead. Many businesses are attracted to the fact that they have total control of the machine and that their system is operating through a distinct physical piece of hardware.

What is a Dedicated Server?

Think you know what a dedicated server is? It’s a single server used by one company, right? Actually, the meaning is a bit different depending on context.

Within any network – such as the network of a corporation – a dedicated server is one computer that serves the network. An example is a computer that manages communications between other network devices or that manages printer queues, advises Vangie Beal of Webopedia. “Note, however, that not all servers are dedicated,” she says. “In some networks, it is possible for a computer to act as a server and perform other functions as well.”

When the context is a hosting service, though, a dedicated server refers to the designation of a server for one client’s sole use. Essentially you get the rental of the machine itself, which typically also includes a Web connection and basic software. A server may also be called dedicated in this sense outside of a hosting company, to differentiate between a standalone server and cloud or other hosting models.

Getting the Most Value

While getting a dedicated server seems to be a solid decision theoretically, it is often cost-prohibitive to go that route. Cloud VMs generally offer better performance and are also significantly more affordable than dedicated servers. If you do decide to implement a dedicated infrastructure, you don’t want to make any mistakes that could diminish the value of your investment.

Dedicated hosting is naturally more sophisticated than other types of hosting such as shared, VPS, or cloud. It’s a great idea to be more proficient with your IT skills if you choose a dedicated setup. Otherwise, it’s easy to make errors – errors that can sometimes become incredibly expensive.

Here are five top mistakes made by businesses with dedicated systems, so you can avoid running into issues with your own server.

Related: Interested in exploring the most affordable dedicated servers? At Superb Internet, you can be sure you get the best possible deal with our Price Match Guarantee. Plus, if you ever have issues with our network, or if we otherwise don’t hold up our end of the bargain, we will give you a 100% free month of service. Explore our dedicated plans.

Error #1 – Poor Cost Management

The problem that you will run into with dedicated solutions is the money, notes Rachel Gillevet of Web Host Industry Review. “Although there are no hidden costs or setup fees associated with most dedicated hosting plans,” she says, “many organizations tend to underestimate the amount of money they’ll need to expend on IT or – in the case of unmanaged hosting – maintenance costs.”

Since every organization wants to reduce the size of its tech budget as much as possible, you want to fully explore cost before deciding on a dedicated system. What is the total cost of ownership (TCO)?

Error #2 – Lack of Attention to Authorizations

When you actually have access to the dedicated server, it’s time to check three simple tasks off your list:

  • Create a sophisticated, hard-to-crack password
  • Disable root access
  • Make sure that only a specific category of users has the ability to add, remove, or modify back-end files.

Those three tasks may sound very rudimentary to many tech professionals. However, skipping them is a common mistake for people who have never used dedicated hosting and aren’t aware of the need for care with logins and permissions.
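
For those configuring a dedicated server for the first time, here is a hedged Python sketch that audits two of the items above on a typical Linux box: whether root SSH login is disabled and whether a back-end directory is writable by group or others. The file paths are assumptions about a common setup; adjust them to your environment.

    # Audit sketch: check PermitRootLogin and directory write permissions.
    import os
    import stat

    def root_login_disabled(sshd_config="/etc/ssh/sshd_config") -> bool:
        with open(sshd_config) as f:
            for line in f:
                if line.strip().lower().startswith("permitrootlogin"):
                    return line.split()[1].lower() == "no"
        return False  # directive missing; defaults vary, so flag it for review

    def loosely_writable(path) -> bool:
        mode = os.stat(path).st_mode
        return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

    if os.path.exists("/etc/ssh/sshd_config"):
        print("Root SSH login disabled:", root_login_disabled())
    if os.path.exists("/var/www"):
        print("Back-end directory writable by group/others:", loosely_writable("/var/www"))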

Error #3 – General Security Neglect

A strong host will make sure that security safeguards are in place, and they should be able to prove it – with certifications for standards such as SSAE 16, ISO 27001:2013, and ISO 9001:2008 (all three of which are held by Superb).

Although it is good to check that hosting services are following industry standards, it’s also important to realize dedicated servers require more of a focus on security from you as the client. Specifically, you need to manage your own security applications and keep an eye on traffic to verify that breaches aren’t occurring.

Error #4 – Failure to Test

If you are still getting to know how your server works, it’s too early to bring it safely online, explains Gillevet. “Make sure you know how to properly use everything,” she says, “and learn the best practices for monitoring and security.”

In other words, you want to be prepared rather than trying to pick everything up “on the fly.”

Error #5 – Excessive Concern with the Server Itself

A common area of oversight is focusing too much on the hardware. It’s incredibly important that the hosting service’s network is capable of delivering strong performance continually. Since that’s the case, you want to go beyond looking at the capabilities of a dedicated server when you choose a host. You want a strong network, with multiple built-in redundancies so that your system is reliable and properly protected from an isolated failure taking down your entire environment.

Conclusion

It’s easy to make mistakes with a dedicated server, especially if it’s a new form of hosting for you. If you do work with a hosting service, make sure you choose one that cares about its customers.

“I would not even consider another web hosting company as my experiences with you are always so positive,” says Superb client Diane Secor.
