29 Oct 2012

Big Data in Play with Hurricane Sandy

by Administrator
21 Apr 2011

Press Release: NationalNet Makes a Statement on Standards, Begins SSAE 16 Attestation

by Administrator

ATLANTA, GA – April 19, 2011 — NationalNet, a web hosting and data center solutions company, announced today that it has begun the process of attaining a Statement on Standards for Attestation Engagements No. 16, Reporting on Controls at a Service Organization, otherwise known as SSAE 16. While the SSAE 16 attestation is new to the web hosting industry, it will supersede the Statement on Auditing Standards No. 70 (SAS 70) by June 15, 2011. NationalNet is choosing to stay ahead of the curve in the industry by beginning this process with the Atlanta-based CPA firm Windham Brannon.

As stated by the American Institute of CPAs, “This section [SSAE 16] addresses examination engagements undertaken by a service auditor to report on controls at organizations that provide services to user entities when those controls are likely to be relevant to user entities’ internal control.”

NationalNet has made a commitment to maintaining its high level of customer satisfaction and growth within the data center and hosting industry. As part of that mission, attaining the SSAE 16 attestation is one step toward making sure customers understand the company’s focus on providing the best services and controls needed to maintain a reliable data center.

“Deep down in our hearts, we know that we do everything necessary to guarantee our customers’ satisfaction every single day,” said Tony Morgan, CEO of NationalNet. “Based on what we’re hearing within the data center industry, we’ve decided to begin the SSAE 16 attestation process. My team and I have spent the first quarter of this year in meetings with Windham Brannon, committing what NationalNet does every day to paper.”

As yet another affirmation to provide outstanding service, NationalNet will complete the SSAE 16 attestation and many other goals within the 2011 calendar year.

About NationalNet

A leader in innovative web hosting and data center solutions, NationalNet is changing the way businesses manage their connectivity.  Whether it’s fully managed hosting or a custom configured colocation space, NationalNet has solutions that exceed expectations and can fit any budget.  NationalNet’s success is built on superior customer service customized to meet the needs of every client.  For more information, please visit www.nationalnet.com, call 678.247.7000, or email sales@nationalnet.com.

About Windham Brannon

Windham Brannon is one of Atlanta’s premier CPA firms, offering comprehensive accounting and advisory services along with proactive tax consultation and compliance. Windham Brannon serves many of Atlanta’s most successful local companies as well as large, publicly held corporations. For more information, please visit www.windhambrannon.com.

22 Feb 2011

High Availability – Is it For You?

by Administrator

A question we get on a regular basis is “How can I ensure that my site will never go down, no matter what?” The answer is always the same: “There are a number of options available to you, depending on your budget and your application.” I have provided a few options below. As always, there are a number of ways to accomplish a task. The list below is by no means complete, but it will give you some ideas on what can be done to ensure that your site is less vulnerable to hardware and/or networking issues.

 

In every case, setting up high availability is going to require duplicating some hardware and may require changing the basic architecture of your site.

1. Round-Robin DNS (RRDNS)

RRDNS is a method where you assign two or more IP addresses in DNS to a single domain name. When more than one IP address is associated with the domain, DNS divides the traffic between the IP addresses. If there are two IP addresses associated with the domain, incoming requests are sent to them alternately. In other words, request number 1 goes to the first IP, request number 2 goes to the second IP, request number 3 goes back to the first IP – well, you get the picture. To set this up, you would mirror your site on two different servers, using one IP on the first server and the second IP on the second server. You are not limited to just two IP addresses per site with this method, but each IP must be on a dedicated server. In other words, if you assigned four IP addresses to the site, you would have to set up four servers, each with an identical copy of the site.
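A quick simulation of that alternating behavior (the IP addresses are illustrative, and real DNS resolvers add caching that can skew the distribution):

```python
from itertools import cycle

# Hypothetical A records for one domain, each pointing at a mirrored server.
a_records = ["203.0.113.10", "203.0.113.20"]

# DNS rotates the record order on successive lookups, so requests
# land on alternating servers.
rotation = cycle(a_records)

served = [next(rotation) for _ in range(4)]
print(served)  # ['203.0.113.10', '203.0.113.20', '203.0.113.10', '203.0.113.20']
```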

 

The Advantage: RRDNS is a free solution that requires only the additional servers and nothing more – no expensive load balancer or difficult software configurations.

The Disadvantage: RRDNS knows nothing about the health of the servers, so if one server goes down, RRDNS will still attempt to send requests to that server, which means half of the requests go to a dead server. Some surfers will hit the server that is up and see the site, while the other half will hit the dead server and see nothing. Another disadvantage is that it may require some site structure redesign. For instance, if you use a CMS to manage your site, you must ensure that the CMS can handle having the site on more than one server.

 

2. Load Balancing

Load balancing is a hardware/software solution that works much like RRDNS except that the site only has one IP address associated with it as opposed to the multiple IP addresses used with RRDNS.  It still requires 2 or more servers with identical copies of the site residing on each server.  The IP address of the site resides on the hardware load balancer.  When a request for the site comes in, the request goes directly to the load balancer.  The load balancer then forwards the request to the least busy server and the server fulfills the request by sending the site content back to the surfer.

 

The Advantage: A load balancer knows about the health of the servers. If a server goes down or is not responding to requests, the load balancer immediately and automatically removes it from the pool and discontinues sending requests to it.  Once the server comes back up, the load balancer automatically puts it back into the pool and starts sending requests to it.

The Disadvantages: Load balancing is an additional cost (we can reduce these costs by putting you on a shared load balancer). In addition, the load balancer itself becomes a single point of failure, so true redundancy requires a secondary load balancer in case the primary fails, doubling your cost. Also, like RRDNS, you must ensure that your site and applications are capable of running on more than one server.
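In pseudocode form, the load balancer’s job looks something like this sketch (server names and connection counts are made up, and real load balancers offer several balancing algorithms, not just least-busy):

```python
# Requests go to the least busy healthy server; servers that fail a
# health check are simply excluded from the pool.
servers = {
    "web1": {"healthy": True,  "active_connections": 12},
    "web2": {"healthy": True,  "active_connections": 7},
    "web3": {"healthy": False, "active_connections": 0},  # failed health check
}

def pick_server(pool):
    healthy = {name: s for name, s in pool.items() if s["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy servers in the pool")
    # Forward the request to the least busy healthy server.
    return min(healthy, key=lambda name: healthy[name]["active_connections"])

print(pick_server(servers))  # web2
```

When web3 comes back up, flipping its "healthy" flag puts it straight back into rotation, which mirrors the automatic pool re-entry described above.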

 

3. Disaster Recovery Site (DRS)

DRS is where you mirror your entire infrastructure (servers, network equipment, etc.) in a second data center, usually one at least 200–300 miles from your main location. The reason for the distance is in case the primary center is subject to a natural disaster. For instance, if a hurricane brings down your primary location, you want to ensure that the secondary location is not affected as well.

 

The Advantage: You have full redundancy that would cover almost any event.

The Disadvantages: It requires that you duplicate your entire infrastructure, thus doubling your costs to protect against an event that may never occur.  Also, setting up a DRS so that it is fully automated requires a serious commitment to programming and architecture.
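Automating the failover itself is where that programming commitment comes in. A minimal sketch, assuming a simple probe-and-threshold policy (site names and the threshold are illustrative, not how any particular DRS product works; a real setup also needs DNS or routing changes, data replication, and failback logic):

```python
# Fail over to the secondary site only after several consecutive
# failed health checks, to avoid flapping on a single bad probe.
FAILURE_THRESHOLD = 3

def choose_active_site(check_results, primary="primary-dc", secondary="backup-dc"):
    """check_results: most-recent-last list of True/False probe outcomes."""
    recent = check_results[-FAILURE_THRESHOLD:]
    if len(recent) == FAILURE_THRESHOLD and not any(recent):
        return secondary  # primary considered down
    return primary

print(choose_active_site([True, True, False]))    # primary-dc – one blip is ignored
print(choose_active_site([False, False, False]))  # backup-dc – sustained outage
```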

 

These are just a few of the things you can do to bullet-proof your site. We also offer MySQL clustering as well as primary/secondary MySQL solutions that can provide redundancy for your databases. As always, should you have any questions about high availability, please contact our sales department.

09 Nov 2010

Unlimited Hosting – Is There A Catch?

by Administrator

If you have ever searched for a web hosting company for your site, you have undoubtedly come across companies promising “unlimited hosting” for some too-good-to-be-true price. You ask yourself, “How is this even possible?” when you see other hosting companies offering the same package for much higher prices. So, is unlimited hosting real? As far as I know, the only thing that is truly unlimited is outer space. That being the case, how can these hosting companies offer “unlimited hosting”?

First, you must consider that their traditional customer, or the customer they are targeting, will not use much in the way of resources. The typical web site uses minimal bandwidth, minimal disk space, and may have only one or two email addresses. Of course, this is what the host is counting on, and this offering is a great way for them to get your business. However, should you start using what they consider too much bandwidth or disk space, you will find a very nice email in your inbox telling you that you’re going to have to upgrade some part of your plan because “we found that you are utilizing resources to the point that it is affecting other customers,” or some similarly worded message. The key to unlimited hosting is not only knowing, but also understanding, the fine print.

If you read the fine print at many of these hosts (which may not be so easy to find) you will see numerous caveats. Below are just a few of them I’ve found during my research.

 

1. We reserve the right to change the terms of the package at any time. They usually list some time frame for the notice – typically 21 to 30 days.

2. Email accounts have limited storage capacity. Of course, for a small upgrade fee you may increase your email storage.

 

3. Backups are not included, but for a small fee you can add a backup plan.

4. Your bandwidth is part of a shared network. This means they have allocated a pool of bandwidth for the unlimited customers. The downside is that any other customer could effectively cause your site to run slower due to a lack of available bandwidth.

5. Large videos are not allowed. What they consider a large video is anybody’s guess.

6. Support is not included with this plan.

7. You may not install any scripts which may affect the performance of the server.

8. You can add all the content you wish, but maybe not all at the same time. (“The vast majority of our customers’ sites grow at rates well within our rules, however, and will not be impacted by this constraint.”) What exactly does this mean?

9. You may not use your disk space as an off-site backup source.

10. Database servers have a limit to the number of concurrent connections.

 

Does this mean that you should never use these hosting companies? Of course not – but you should definitely approach with caution and keep expectations low. If you have a small site that you know won’t use a lot of disk space or server resources, then a small, affordable “unlimited hosting” package may be perfect for you. Just remember the phrase “caveat emptor” – Latin for “let the buyer beware.”

19 Oct 2010

IPv6 – What It Is and Other Fun Facts

by Administrator

You may have heard that the Internet is quickly running out of IP addresses. If you’re even slightly technical, you have probably also heard that a new IP numbering scheme called IPv6 is on the horizon and it will solve all of our IP address issues…but will it? The short answer is yes, but like everything else in life, it’s a bit more complicated than that.

Let’s start by discussing our current IP address scheme – IPv4. An IPv4 address is expressed as 4 octets, for example 192.168.1.1 (say it like 192 dot 168 dot 1 dot 1), where each number between the dots is an octet. When this addressing scheme was created in 1980, it was the fourth revision in the development of the Internet protocol and the first version to be widely deployed. Those 4 octets give IPv4 the ability to have 4,294,967,296 IP addresses. At the time this was deployed, the engineers nodded their collective heads and agreed that roughly 4.3 billion IP addresses would be more than enough and we would never run out. Unfortunately, what they could not foresee was the Internet boom, nor the fact that so many devices besides computers, routers, and servers would use IP addresses. With the explosion of smart devices, including TVs, appliances (does my toaster REALLY need an IP address?) and especially cell phones, IP address usage has skyrocketed. In February 2010, the International Telecommunication Union announced that the number of cell phones worldwide had passed 4.6 billion, with “smart phones” (phones that run an operating system and require an IP address) making up a larger and larger chunk of that number. Smart phone usage is growing by leaps and bounds: in the first half of 2010, vendors shipped a total of 119.4 MILLION smart phones, an increase of 55.5% over the first half of 2009.
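The arithmetic behind that address count:

```python
# Four octets of 8 bits each give IPv4 a 32-bit address space.
octets = 4
bits_per_octet = 8
total_ipv4 = 2 ** (octets * bits_per_octet)
print(total_ipv4)  # 4294967296
```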

Now we see the issue: IPv4 space is running out, and at an extremely quick rate. Estimates of how long we have before it is exhausted range from 200 to 400 days.

So why don’t we just roll out IPv6 immediately? Unfortunately, it’s not as easy as flipping a switch. To fully roll out IPv6, EVERY device in the WORLD that uses an IP address has to support IPv6, or it won’t be able to access the Internet. There are still many routers, PCs, and servers just old enough that they don’t support IPv6. Many cable and other Internet providers (ISPs) are looking at major upgrades to make sure they can keep supplying their customers with Internet access. NationalNet has worked closely with our providers and is running both IPv4 and IPv6 with all of them at this time. We are also running IPv6 internally on many devices and are fully ready to move everything to IPv6 when the time is right.

So, after all of this, are we sure that we have enough IPv6 IP addresses? We certainly don’t ever want to be in this predicament again. When the engineers rolled out IPv4, they didn’t anticipate the issues we have now with IPv4, so did they plan correctly this time? Just how many addresses are there in the IPv6 implementation? Well, that’s easy – it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 IP addresses.

Okay, I know what you’re thinking…that number is too big for you to even get your head around, so let’s say it in words. It’s three hundred forty undecillion, two hundred eighty-two decillion, three hundred sixty-six nonillion, nine hundred twenty octillion, nine hundred thirty-eight septillion, four hundred sixty-three sextillion, four hundred sixty-three quintillion, three hundred seventy-four quadrillion, six hundred seven trillion, four hundred thirty-one billion, seven hundred sixty-eight million, two hundred eleven thousand, four hundred fifty-six.

Yes, I know – that makes it even harder to understand, so Tomas, our Director of Technical Services, broke it down for me. When we received our IPv6 allocation a few months back I was trying to figure out how many IP addresses we had. I knew we had 79,228,162,514,264,337,593,543,950,336 IPs in our allocation but that’s a hard number to wrap your head around so here’s what Tomas told me.

“If you remember, I said that the Internet had about 4 billion IPs, so NationalNet’s IPv6 allocation is equivalent to 18,446,744,073,709,551,616 copies of the current Internet.”

OK, that number was still too big…so Tomas broke it down even further: “If NationalNet decided to give a copy of the current Internet to everyone on the planet out of our IP space, we could give each person 3,074,457,345 internets apiece.” Now that’s a LOT of IP space. Based on our allocation, which is just a small fraction of the overall space, I would say the engineers got it right and we shouldn’t ever run out. Also, if you’re testing IPv6 at home with your current ISP, we have an IPv6 web site at http://ipv6.nationalnet.com.
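All of those giant numbers check out with a little arithmetic (treating our allocation as 2^96 addresses, which is what the quoted figure works out to, and assuming roughly 6 billion people for the “internets apiece” division):

```python
# IPv6 addresses are 128 bits wide.
total_ipv6 = 2 ** 128
print(total_ipv6)  # 340282366920938463463374607431768211456

# Our allocation, as quoted in the text, is 2**96 addresses.
allocation = 2 ** 96
print(allocation)  # 79228162514264337593543950336

# How many copies of the ~4-billion-address IPv4 Internet fit in it?
ipv4_internet = 2 ** 32
copies_of_internet = allocation // ipv4_internet
print(copies_of_internet)  # 18446744073709551616

# Spread across ~6 billion people, that is the "internets apiece" figure.
print(copies_of_internet // 6_000_000_000)  # 3074457345
```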

05 Oct 2010

NationalNet: A Step Toward Sustainability

by Administrator

NationalNet is proud to join the ranks of other eco-friendly data centers by installing an Eaton Energy Saver System. The new system is built to run our UPS units more efficiently and cut down the amount of unused power. This is a common problem with data center structures, and Eaton has developed a way to minimize the loss, which translates into 99% usable power.

An excerpt from their case study reads: “Eaton is changing the game with the revolutionary Energy Saver System. UPSs equipped with this technology deliver 99 percent efficiency or better without sacrificing reliability… The intelligent power core continuously monitors incoming power conditions and balances the need for efficiency with the need for premium protection, to match the conditions of the moment.” You can read more of their case study here.

So, that’s all fine and dandy, but how does it translate to the green initiative? By increasing our own power efficiency from 94% to 99% at a data center critical load of over 2,000 kW, there is a five-year energy savings of more than 8,897,000 kWh. That equals saving 6,064 metric tons of CO2 over a five-year period, or taking more than 1,897 cars off the road for a full year!* Now, that realization is impressive. And as we continue to grow, we will consciously minimize our carbon footprint and do what we can to improve the environment in the data center scheme of things.
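As a rough sanity check on those figures, assuming a grid emission factor of about 0.68 kg of CO2 per kWh (an assumption on my part, not a number from Eaton’s study):

```python
# Convert the quoted energy savings into CO2 and per-car equivalents.
energy_saved_kwh = 8_897_000   # five-year savings from the text
kg_co2_per_kwh = 0.68          # assumed grid emission factor

tons_co2 = energy_saved_kwh * kg_co2_per_kwh / 1000  # metric tons
print(round(tons_co2))  # 6050 – close to the 6,064 tons quoted

# Implied emissions per car per year, given the "1,897 cars" figure.
tons_per_car_per_year = tons_co2 / 1897
print(round(tons_per_car_per_year, 1))  # 3.2 metric tons per car-year
```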

This eco-conscious step is one of the biggest NationalNet has taken so far. With a history of green internal practices, such as recycling plastics and using high-efficiency chillers for our air conditioning units, installing Eaton’s new system just makes sense.

*Figures used in study have been extrapolated based on Eaton’s Energy Saver System report

07 Sep 2010

Do I Need a CDN?

by Administrator

What exactly is a CDN?

As someone who runs a hosting company, I get this question on a regular basis as we have a couple of different CDN offerings. Many people have heard of a CDN, usually from a friend or an online article, but few people know what it does or how it works, so let’s start with a definition and overview.

CDN is short for Content Delivery Network. CDNs were created to deal with proximity issues when delivering content from a web site. What this means is that the further away the surfer is from your web site, the slower it will load for them. If your web site is hosted in the US, a surfer in the US is going to see much faster load times than a surfer in Australia, simply because they are closer to the web site server.

A CDN consists of storage servers called “caching servers” or “edge servers” that are strategically placed around the world. Web site content is pulled from the “origin server” (the server that hosts your web site) and pushed out to the edge servers. When a surfer requests that content, the CDN first determines where the surfer is, then finds the edge server nearest to them, and finally checks whether the requested content is on that edge server. If it is, the CDN delivers the content from the edge server, providing faster delivery to the surfer. If the CDN determines that the content should exist on the edge server but for some reason does not, it pulls the content from the origin server and places it on the edge server. By pointing the URLs in the site’s HTML (the src and href attributes) at either the CDN or the origin, the webmaster controls which content is served from the CDN and which is delivered directly from the origin server. Now that we know what a CDN is, let’s determine if you need one.

In the description above, we learn that a CDN has to go through a decision process to determine what to do (where is the surfer, does the content exist, etc). This decision process takes a second or two, which can add a delay to the loading of the content. Due to this decision process, it should be apparent that a CDN is not ideal for all sites or all content. A CDN was designed with larger content (files) where a 1 second delay is not critical. A CDN is also designed for “popular content” i.e., content that is accessed often. Edge servers do not have infinite disk space so all CDNs automatically expire or delete content on the edge servers if the content has not been accessed in some time. This time span is configurable by the site owner but usually it’s 30 days. So, it would not make sense to put up content that is only accessed occasionally.
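That cache-or-pull decision can be sketched in a few lines (paths and content are placeholders; real CDNs also handle geography, expiry windows, and much more):

```python
# Minimal edge-server behavior: serve from the local cache if the
# object is there, otherwise pull it from the origin, cache it, serve it.
edge_cache = {}

def origin_fetch(path):
    # Stand-in for pulling the object from the origin server.
    return f"<content of {path}>"

def serve(path):
    if path not in edge_cache:        # cache miss: pull from origin
        edge_cache[path] = origin_fetch(path)
    return edge_cache[path]           # cache hit: serve from the edge

first = serve("/videos/tour.mp4")     # first request pulls from the origin
second = serve("/videos/tour.mp4")    # later requests hit the edge cache
print(first == second)  # True
```

A real CDN would also time-stamp each cached object and evict anything not requested within the configured expiry window (the 30 days mentioned above).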

A typical web site is hosted in a single location at a hosting company, and the web site usually consists of web pages of text and images and has been, or SHOULD BE optimized for the best performance. If a web site is properly optimized, it should load pretty quickly from anywhere in the world. Text is very small and most web site images are very small as well. Putting this small content on a CDN will actually defeat the purpose due to the decision process explained above. However, if you have large images, large downloads (zip files, software files, etc) or movie files, a CDN is perfect for this application, provided you have popular content to deliver. We have found that the best use of a CDN is to deliver the web site text and smaller images directly from the origin server and put the larger files on the CDN, thus providing a “best of both worlds” experience for the surfer. Large streaming movies will play much faster on the CDN. Large zip file downloads will download much more quickly on the CDN. The one exception to putting smaller files on the CDN would be a JavaScript file, or some other file that never changes and is always loaded on every page. We have seen some customers use the CDN for JavaScript files and have good success with them.

What a CDN is NOT good for…

CDN edge servers are essentially very large disk arrays, and they are very good at delivering content; however, they are not designed for processing, so you usually cannot put PHP files or any other scripts/programs that require server-side processing on them. You can put JavaScript on them because it is processed on the client side by the browser. CDN providers want their edge servers to “shovel content” to the surfer and nothing more. Adding server-side processing would simply slow down the CDN and create a new level of unneeded complexity.

CDNs are also not cheap. Because of the infrastructure required, as well as the software that runs it, there is a significant investment required to build a CDN. This means that CDN bandwidth can run 2-10 times more than regular bandwidth provided by your hosting company.

Finally, if most of your surfers are in the same area as your web site, i.e., your web site is in the US and most of your surfers are in the US, there is no benefit to having a CDN, as you’ll be paying extra but not really seeing any faster speeds.

So, if you have a web site that delivers larger files, whether streamed or downloaded, you have surfers all over the world and you wish to give them the best possible experience, it may be time to see if YOU need a CDN.

24 Aug 2010

Server Specs: Do They Really Matter?

by Administrator

“Hi, I’m interested in hosting with you and would like to discuss pricing. What’s the biggest, baddest server (we’ll refer to this server as the “BBS”) you guys offer?” I’m always surprised by how many sales conversations start off this way. Instead of having a discussion about needs and how best to meet them, some webmasters think that if they get a monster box (the BBS), it will just magically take care of everything they need now, or may need in the future, and this mindset could not be more incorrect. This is akin to wanting to open a small corner store but asking your real estate agent to go out there and find the biggest empty building they can find.

As I wrote in my previous article, “10 Tips to Selecting Your Web Host Company,” the first thing you need to do is determine your needs. By this, I mean look at your site, any software packages you may be using, and other aspects such as databases, disk space needs, etc. This will help you narrow down your search for a web host. Once you have narrowed your search to two or three hosting companies, it’s time to talk to them and get their input on what you need. Many times the BBS is not what you need at all. There are a few reasons why having a BBS can actually be an issue.

1. Cost. Of course, the larger the server, the more it’s going to cost. A server with dual processors, 48 GB of RAM, and a giant drive or RAID array will certainly put a dent in your budget.

2. Scalability. You now have all your eggs in this one BBS basket and when it’s time to grow, you’re likely going to have to get another server just like your first BBS – especially if your growth plan includes incorporating load balancing, as load balanced servers should be as close spec-wise as possible.

3. Efficiency. You might have a BBS with only a single big drive in it, but because your site is very database-intensive, you find that the BBS is actually bottlenecking your site because the single disk can’t keep up. Because all you wanted was a BBS and you never talked to your host about the best SOLUTION for you, you’re now faced with possible downtime to change the drive to something that suits your needs better.

Talking to your hosting provider can go a long way toward making sure you’re using the correct hardware. For instance, in example #3 above, after discussions with your host you may learn that the best solution is actually splitting the load between two smaller servers, where one is a web server and the other is a dedicated database server. Had you gone with only one BBS, your site performance would likely have suffered, leaving you yelling at your web host because your site was slow when, in reality, the host did exactly what you told them to.

So, when it’s time to select a web host and the server (or servers) for your needs, remember to talk with the host about what’s best for you. Your host does this for a living, and any respectable host will be more than happy to work with you to find the best fit for your needs. Speaking personally, I would rather you be a happy customer with a smaller, less expensive server than an unhappy customer with a BBS that’s not meeting your needs.

03 Aug 2010

Metered Bandwidth v. Unmetered Bandwidth

by Administrator

In my previous posting, I discussed the difference between throughput (95th percentile) and transfer (per gig). In this article, we’re going to dig a bit deeper and discuss the advantages of metered (un-capped) versus un-metered (capped) bandwidth.

We start with a question from Jonas, a reader, who asks, “With 95 Percentile, can I have my transfer rate limited to some upper limit so that I have a cap on what my bandwidth cost will be each month? I would worry that if I had 2 or 3 days with 10x my normal traffic I would have a heart attack when I get the bill.”

In a word: YES. You can have your bandwidth CAPPED if you like, and if your hosting company will allow it. That said – and I can only speak for NationalNet – we do not recommend that our clients cap their bandwidth, and this is based on sound business practices. Here are some examples:

Example #1: You own a web site that earns you revenue, either by selling your wares or by charging clients a monthly fee to access your content. For whatever reason, your site starts getting an unusually high amount of good traffic; bandwidth goes UP, but so do REVENUES…and thus so do PROFITS! However, if your plan is capped, once you reach your limit, surfers are turned away or greeted with a slow, almost unusable site. No one can make purchases, and paying members cannot access your wonderful content, with the end result being a LOSS of revenue and profit.

Example #2: You have a site that offers articles or content that other web sites put onto their own sites (possibly paying you for it), and your site gets listed on digg.com or discussed on the major news outlets, getting hammered with traffic, and bandwidth goes up. If you are capped, the site bottlenecks and surfers have to WAIT for the page to load until the people in line ahead of them are done. Then your clients start complaining, and possibly start canceling their service, because their surfers are complaining. Obviously, no one likes losing paying customers.

Of course, the previous two examples make the assumption that the traffic hitting your site is good quality traffic. However, there are times when your bandwidth is being stolen via hotlinking (another web site linking directly to images/content on YOUR server) and this is not desirable. At NationalNet, our monitoring system will alert us to abnormally high bandwidth, our system administrators will investigate and stop the thieves as well as notify you of this high bandwidth. However, you should make it a habit to check your website stats on a daily basis. Not only will this help you understand your traffic and visitors better; you’ll catch any large bandwidth jumps before they can be too costly for you.

So, as you can see, a capped (unmetered) connection is probably not what you want. The key is making sure that you check your bandwidth stats on a regular basis and that you use a host that will watch them for you as well, alerting you if bandwidth starts to exceed your budgeted amount. Also, make sure you know what bandwidth overages will cost you. Many hosting companies advertise their plans on their site, listing a server and some amount of bandwidth for X dollars, but nowhere in the plan details do they list the overage charges, and in many cases those charges are considerably more than the regular committed rate. Be sure you know what those charges are.
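To see why overage charges matter, here is a sketch of the math with made-up prices (no real host’s rates are implied):

```python
# A plan with a committed rate and a much higher per-unit overage rate.
committed_mbps = 10
committed_price = 300.00     # flat monthly price for the committed rate
overage_per_mbps = 75.00     # per-Mbps charge above the commit

def monthly_bill(billable_mbps):
    overage = max(0, billable_mbps - committed_mbps)
    return committed_price + overage * overage_per_mbps

print(monthly_bill(8))   # 300.0 – under the commit, flat price
print(monthly_bill(14))  # 600.0 – 4 Mbps over, billed at the overage rate
```

Note that going 40% over the commit doubled the bill in this example, which is exactly the kind of surprise the fine print can hide.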

Now, with all of that said, there are times when an unmetered plan is exactly what you want. If you have a site that you know will never exceed your plan unless something really bad is happening, or you have a site that is not revenue driven, or you don’t really care if it’s slow at times, then an unmetered plan may be exactly what you need. Unmetered plans tend to be cheaper as well, because the hosting company knows exactly how much bandwidth it must purchase and does not have to buy extra to cover overages and spikes.

 

Traffic/bandwidth by its very nature is spiky. On any given day, it goes up and down in fairly wild swings. For instance, our own bandwidth graphs look like mountains and valleys. Joe Surfer gets out of work and bandwidth goes up…and keeps going up until about midnight EST, when it starts going down. Special traffic deals, viral marketing, etc., all contribute to this “spikiness” (did I just make up a word?). Any host worth its salt must make sure it has plenty of extra bandwidth overhead to cover this spikiness, so that the actions of one or two webmasters do not affect everyone else.

It’s very expensive for a good host to pay for all of that bandwidth overhead, but in the long run it’s well worth it.

One final thing to be aware of regarding unmetered/capped plans is that many times these plans run on shared bandwidth. This means the host or provider is itself capped by its upstream providers, or has purchased a set amount of bandwidth, keeps adding customers against it, and hopes those customers never use their full allocation. This is commonly called “overselling”. A good example is a host with a 1 Gbps connection to its provider that sells 200 10 Mbps plans (the equivalent of 2 Gbps) on that single connection. The risk is that if even half of the customers use their entire allocation, everyone suffers because there is not enough bandwidth to go around. Overselling is a risk some hosts take, but one NationalNet never will. It’s not worth risking our reputation by having even one day where the network is slow due to overselling.
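The arithmetic behind the example above can be captured in a couple of lines. This is a sketch with made-up names; the plan counts and sizes are just the figures from the paragraph, not any particular host’s real numbers.

```python
def oversubscription_ratio(link_capacity_mbps, plans):
    """Ratio of bandwidth sold to bandwidth actually available.

    plans is a list of (customer_count, mbps_per_plan) tuples.
    A ratio above 1.0 means the host has oversold its link.
    """
    sold = sum(count * mbps for count, mbps in plans)
    return sold / link_capacity_mbps

# The example from the text: 200 x 10 Mbps plans on one 1 Gbps link.
ratio = oversubscription_ratio(1000, [(200, 10)])
print(ratio)  # 2.0 -- twice as much bandwidth sold as actually exists
```

At a 2.0 ratio, the link saturates as soon as customers collectively use half of what they were sold.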

23 Jul 2010

What is 95th Percentile?

by Administrator

What is this 95th Percentile (or, the difference between throughput and transfer)?

Many, if not most, hosting companies sell and bill bandwidth using a method called the 95th percentile, and many, if not most, customers don’t have a clue what the 95th percentile really is. In this article, I’ll try to shed some light on it.

In order to explain, we must first understand the difference between the two bandwidth billing methods: TRANSFER (95th percentile billing) and THROUGHPUT (per-gigabyte billing). Let’s look at them individually.

Throughput is the actual total SIZE of the combined files sent by the server. Throughput is sold in gigabytes (GB) as an aggregate monthly total. For example, say you have a web page called THISPAGE.HTML: the page itself is 25 kB, and on it you have 3 graphic images of 25 kB each, for a total of 100 kB. If 100,000 people downloaded that page over the course of a month, your throughput would be 100 kB × 100,000 = 10,000,000 kB, or 10 GB. So for that month your THROUGHPUT would be 10 GB. It does not matter whether all 100,000 people hit the server at the same moment or were spread evenly across the month; it is still 10 GB of THROUGHPUT for the month.
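The throughput calculation above is simple enough to sketch in code. The function name is made up for illustration, and it uses the article’s round convention of 1 GB = 1,000,000 kB.

```python
def monthly_throughput_gb(page_kb, downloads):
    """Total transfer volume (throughput) in GB for the month.

    page_kb   -- size of one full page load in kB
    downloads -- number of page loads in the month
    Uses the article's round numbers: 1 GB = 1,000,000 kB.
    """
    return page_kb * downloads / 1_000_000

# THISPAGE.HTML: 25 kB of HTML + 3 x 25 kB images = 100 kB per view.
page_kb = 25 + 3 * 25
print(monthly_throughput_gb(page_kb, 100_000))  # 10.0 GB
```

Note that the timing of the downloads never enters the formula, which is exactly why throughput billing ignores spikes.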

Now, on to TRANSFER. But before we begin, let me state that under *NO* circumstances can you mix Throughput and Transfer. It’s like trying to add up gallons and nickels: they are two different things.

TRANSFER is measured in megabits per second (Mbps) and measures how much information is traveling through the Internet “pipe” at any given time. I like to compare TRANSFER to water in a series of pipes. Imagine that your home PC has a water hose connected to it instead of an Internet connection. The hose is ½″ and connects to a 2″ pipe at the side of your house, and your house is connected to the water main, a 12″ pipe. In this example, the ½″ hose is your home Internet connection, the 2″ pipe is your ISP, and the 12″ water main is the backbone of the Internet. No matter how hard you try, you are only going to get ½″ of water into your PC at any given time, because the “pipe” is only a ½″ hose.

Now, if I were going to sell you water BY THE GALLON, that would be called Throughput (see above). Or I could sell you a PIPE and charge you only for the amount of water you push through it at any given time; this is called TRANSFER. For example, suppose I take a measurement right now and you are pushing 1″ of water through the pipe; five minutes later you are still pushing 1″; five minutes after that, ½″; and five minutes after that, 2″. How big a pipe do you need to accommodate your traffic flow without any water backing up like a funnel? You would need a 2″ pipe. But you are not using 2″ all the time, so why should you have to pay for a 2″ pipe all the time? This is where the 95th percentile comes in.

The 95th percentile (an industry standard) simply means that the hosting company looks at your pipe every five minutes, takes a reading, and adds that reading to a list kept for 30 days. At the end of the month that list contains 8640 readings (there are 12 five-minute intervals in an hour, 24 hours a day, for 30 days). The list is then sorted from largest to smallest, so your biggest five-minute reading is on top, the second biggest next, and so on. The top 432 entries (the top 5%) are discarded, and the 433rd is your “95th Percentile” — the number you pay for. The 95th percentile was designed to chop off wild peaks and bill you only for what you sustain on a regular basis. This is a rolling 30-day number that is constantly changing: once you have 8640 data points, every time a new data point is added and the list is re-sorted, the oldest data point drops off.

As for which method is more advantageous, it depends on your site’s traffic patterns. TRANSFER (95th percentile) is good for almost all sites, with very few exceptions. THROUGHPUT is recommended for sites with extremely high spikes or very inconsistent traffic. For example, if you have very high traffic every Monday but very low traffic the rest of the week, then being billed on THROUGHPUT may be best for you: all those big Monday readings would create an inflated 95th percentile. However, very few sites have this type of traffic pattern.

With TRANSFER billing, your host should provide 95th percentile graphs (usually MRTG graphs, the industry standard) so you can see your transfer yourself. Check these graphs every day: they can reveal problems as well as show your traffic patterns. You should see highs and lows each day, and those patterns should follow the sun. If you see a flat line across the top of the graphs, your hosting company doesn’t have enough bandwidth to handle your needs (and this is much more common than one would think). ***IF YOU ARE BEING BILLED ON 95TH PERCENTILE, MAKE SURE YOUR HOSTING COMPANY PROVIDES YOU WITH THOSE GRAPHS.*** If they refuse, they obviously have something to hide.

Hopefully this helps you understand what 95th percentile is.
