28 Oct 2013

Mozilla Lightbeam Tool for Firefox Illuminates Who is Watching Web Users

by Administrator

With online privacy at the top of the public’s mind in the wake of revelations about the US’s NSA surveillance program, Mozilla, the open-source community behind the popular Firefox browser, has launched Lightbeam, an add-on for Firefox that reveals just who is looking over your shoulder as you browse the internet.

Most web users have long been aware that their digital trail is tracked and used for targeted advertising. Search for hotels in South Carolina and, “miraculously,” ads for South Carolina hotels will pop up on sites all across your internet travels for weeks to come. Users who install and activate Lightbeam will be able to view real-time visualizations of the sites they’ve visited and the third-party entities that are harvesting their data for commercial purposes.

The add-on allows users to opt in to sharing their data anonymously, which will go toward producing a “big picture” view of web tracking and revealing the activity of these third-party data aggregators. Mozilla’s executive director, Mark Surman, says: “It’s a stake in the ground in terms of letting people know the ways they are being tracked. At Mozilla, we believe everyone should be in control of their user data and privacy and we want people to make informed decisions about their Web experience.” While many users are aware of the cookies installed on their computers when they visit a website, far fewer realize that third parties can access those cookies to glean their interests and browsing history, building a digital picture of each individual user for marketing purposes.

Firefox and other major browsers already offer the option of disabling cookies, and the EU has passed “The Cookie Law,” which requires sites to state explicitly how they will use visitors’ data and with whom they will share it, and to obtain consent before installing cookies on visitors’ computers. Lightbeam goes further. When a user who has activated the add-on visits a website, it creates a real-time visualization of all the third parties active on that page. As the user browses to a second site, the add-on highlights the third parties that are also active there and shows which of them have tracked the user’s presence on both sites. The visualization grows with every site visited.

Mozilla says it has faced “tremendous pressure” from trade bodies who would have preferred to continue their work unobserved and behind the curtain, but the group feels duty-bound to bring transparency to the Web, particularly in today’s climate of user uneasiness about how personal data is used and whether privacy has been compromised. Site owners who have agreements with third parties that track their visitors should make sure they are comfortable with the structure of those relationships becoming public knowledge; we predict that the aggregation of this data will reveal relationships that cost some sites customers. The tool also brings to light just how far ad networks and commercial user tracking have come in a short amount of time. Striking a balance between customer-focused service and privacy protection is likely to be a major point of emphasis in the months and years to come, from both a Department of Justice and a search engine optimization perspective.

22 Feb 2011

High Availability – Is it For You?

by Administrator

A question we get on a regular basis is, “How can I ensure that my site will never go down, no matter what?” The answer is always the same: “There are a number of options available to you, depending on your budget and your application.” I have provided a few options below. As always, there are a number of ways to accomplish a task; the list below is by no means complete, but it will give you some ideas on what can be done to make your site less vulnerable to hardware and/or networking issues.

In every case, setting up high availability is going to require duplicating some hardware and may require changing the basic architecture of your site.

1. Round-Robin DNS (RRDNS)

RRDNS is a method where you assign two or more IP addresses in DNS to a single domain name. When more than one IP address is associated with the domain, DNS divides the traffic among them. If there are two IP addresses associated with the domain, incoming requests are sent to them alternately: request number 1 goes to IP number 1, request number 2 goes to the second IP, request number 3 goes back to the first IP, and so on. To set this up, you would mirror your site on two different servers, using one IP on the first server and the second IP on the second server. You are not limited to just two IP addresses per site with this method, but each IP must be on a dedicated server. In other words, if you assigned four IP addresses to the site, you would have to set up four servers, each with an identical copy of the site.
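The alternating behavior can be modeled with a toy Python sketch (illustrative only, not actual DNS server code; the addresses are reserved example addresses):

```python
from itertools import cycle

# Hypothetical A records for one domain, one per mirrored server.
a_records = ["192.0.2.10", "192.0.2.11"]

# Round-robin DNS simply rotates through the record set on successive
# responses, so clients end up alternating between the servers.
rotation = cycle(a_records)

def next_ip():
    """Return the IP the next request would be directed to."""
    return next(rotation)

# Requests 1 through 4 alternate between the two servers.
sequence = [next_ip() for _ in range(4)]
print(sequence)  # ['192.0.2.10', '192.0.2.11', '192.0.2.10', '192.0.2.11']
```

Note that this rotation is blind: nothing in the loop checks whether a server is alive, which is exactly the disadvantage described below.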

 

The Advantage: RRDNS is a free solution that requires only the additional servers and nothing more, with no expensive load balancer or difficult software configurations.

The Disadvantage: RRDNS knows nothing about the health of the servers, so if one server goes down, RRDNS will still attempt to send requests to it, which means that half of the requests go to a dead server. Half of your visitors will hit the server that is up and see the site, while the other half will hit the dead server and see nothing. Another disadvantage is that it may require some site structure redesign. For instance, if you use a CMS to manage your site, you must ensure that the CMS can handle having the site on more than one server.

 

2. Load Balancing

Load balancing is a hardware/software solution that works much like RRDNS, except that the site has only one IP address instead of the multiple IP addresses used with RRDNS. It still requires two or more servers, each with an identical copy of the site. The IP address of the site resides on the hardware load balancer. When a request for the site comes in, it goes directly to the load balancer, which forwards it to the least busy server; that server fulfills the request by sending the site content back to the surfer.
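The “healthy and least busy” selection a load balancer performs can be sketched like this (server names and load counts are invented for illustration):

```python
# Minimal sketch of health-aware, least-busy server selection.
servers = {
    "web1": {"healthy": True,  "active_requests": 12},
    "web2": {"healthy": True,  "active_requests": 3},
    "web3": {"healthy": False, "active_requests": 0},  # down: excluded
}

def pick_server(pool):
    """Pick the healthy server with the fewest active requests."""
    healthy = {name: s for name, s in pool.items() if s["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy servers in pool")
    return min(healthy, key=lambda name: healthy[name]["active_requests"])

print(pick_server(servers))  # web2
```

Unlike the RRDNS rotation, the unhealthy server never receives a request; when its health flag flips back to True it automatically rejoins the pool.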

 

The Advantage: A load balancer knows about the health of the servers. If a server goes down or is not responding to requests, the load balancer immediately and automatically removes it from the pool and discontinues sending requests to it.  Once the server comes back up, the load balancer automatically puts it back into the pool and starts sending requests to it.

The Disadvantages: Load balancing is an additional cost (we can reduce these costs by putting you on a shared load balancer).  As well, the load balancer now becomes a single point of failure so in order to have true redundancy it requires a secondary load balancer in case the primary fails, thus doubling your cost.  Also, like RRDNS, you must ensure that your site and applications are capable of running on more than one server.

 

3. Disaster Recovery Site (DRS)

A DRS is where you mirror your entire infrastructure (servers, network equipment, etc.) in a second data center, usually one at least 200 to 300 miles from your main location. The reason for the distance is to protect against natural disasters: if a hurricane brings down your primary location, you want to ensure that the secondary location is not also affected.

 

The Advantage: You have full redundancy that would cover almost any event.

The Disadvantages: It requires that you duplicate your entire infrastructure, thus doubling your costs to protect against an event that may never occur.  Also, setting up a DRS so that it is fully automated requires a serious commitment to programming and architecture.

 

These are just a few of the things you can do to bullet-proof your site. We also offer MySQL clustering as well as primary/secondary MySQL solutions that can ensure some redundancy in your databases. As always, should you have any questions about high availability, please contact our sales department.

23 Nov 2010

Offshore Hosting vs. US Based Hosting, A Legal Analysis

by Administrator

by Matthew P. Collins

Which should you choose? Everybody in the hosting business has been asked this question many times, and alas, the answer is not an easy one to give. NationalNet is proud to offer both US-based hosting and, through its affiliated companies, offshore hosting. This flexibility is ideal, as your needs may shift over time. I’m going to take you through the steps in evaluating your offshore or US-based hosting needs, with some legal considerations to keep in mind as you think about these options.

How do you choose whether to use US based or offshore hosting? What do you truly need for your business? Start with this question: what are your main concerns and priorities in looking at a hosting location? Think of your top three priorities for your hosting needs. Let’s look at your results…

US Based Hosting

If you said your top priorities are:

  • Price
  • Speed
  • Ease of Access

This is the normal ranking of priorities; however, you shouldn’t stop reading now, as there may be other considerations worth your review.

The US hosting market is very competitive and has the best bandwidth, colocation and server deals. Furthermore, the diversity of connectivity options is unparalleled. For this reason, the best price is almost always going to be found in US based hosting.

Many customers are looking for the fastest solution to get content into the hands of the surfer. No doubt exists that the closer you are to the customer, the faster the content will be delivered and this explains the popularity of content delivery networks and mirror hosting as options to get the content closer to the surfers. Ask yourself this: where are your customers? Then you can better discuss where you should locate your hosting.

Some customers need access to their equipment on a regular basis, and this is only possible if the equipment is based near your office. You should review your visits to your colocation cage and determine how often these visits occur and how necessary they are in order to evaluate this need.

Offshore Hosting

If you said your top priorities are:

  • Targeting your international visitors
  • Avoiding US Tax authorities
  • Avoiding any interaction with the US government

Then you should look at offshore hosting. Even if these issues weren’t your first priorities, you should certainly weigh them in your decision about where to host. They are important legal considerations for any company.

Targeting your customers is another good reason to choose offshore hosting. Before making any hosting decision, you should spend some time to learn the answer to this important question: where are your customers located? You may find that a surprising number are located in Europe or other important regions of the world. Just as I mentioned above, the closer you are to your customers the faster your content will reach them, so having an offshore solution often makes sense to deliver a superior customer experience to your international visitors.

There are many terrific solutions to help you get closer to your international customer. Just to name a few that you can consider:

  • Mirror your sites offshore;
  • Set up a content delivery system with a specific target of certain international locations;
  • Set up offshore content and sites specifically targeted to local countries and languages.

As you can see, there are a wide variety of good business and customer support reasons to set up offshore hosting.

Some companies want to host their content offshore to avoid US taxes. Beware: just because you host your content offshore does not mean you don’t have to pay US taxes. Many times a client seeks offshore hosting to avoid US taxes, only to be told that going offshore does not eliminate US tax liability. If avoiding US taxes is your goal, talk to your tax professional to find out whether offshore hosting will accomplish it. Don’t assume that just because your server is offshore, you are exempt from US taxes.

Another common misconception is that having the servers offshore means that you can successfully avoid interaction with the US Government. Rest assured that the reach of the US government is long and strong and this assumption is dangerous to make. Talk to your lawyer about the best strategy for you to utilize if you want to avoid US government interactions and make sure to discuss the option of offshore hosting.

In life there are tradeoffs and in the choice of whether to choose US based or offshore hosting, you have to decide which priorities you want to emphasize and then you can better evaluate your hosting needs.

Matthew P. Collins is General Counsel for NationalNet and has practiced law since 1993 with a focus on Internet and hosting business law from the Internet’s inception.

09 Nov 2010

Unlimited Hosting – Is There A Catch?

by Administrator

If you have ever searched for a web hosting company for your site, you have undoubtedly come across companies promising “unlimited hosting” for some “too good to be true” price. You asked yourself, “How is this even possible?” when you saw other hosting companies offering the same package for much higher prices. So, is unlimited hosting really true? As far as I know, the only thing that is truly unlimited is outer space. That being the case, how can these hosting companies offer “unlimited hosting”?

First, you must consider that their typical customer, the customer they are targeting, will not use many resources. The typical web site uses minimal bandwidth, minimal disk space, and may have only one or two email addresses. Of course, this is what the host is counting on, and this offering is a great way for them to get your business. However, should you start using what they consider too much bandwidth or disk space, you will find a very nice email in your inbox telling you that you’re going to have to upgrade some part of your plan because “we found that you are utilizing resources to the point that it is affecting other customers,” or some similarly worded message. The key to unlimited hosting is not only knowing, but also understanding, the fine print.

If you read the fine print at many of these hosts (which may not be so easy to find) you will see numerous caveats. Below are just a few of them I’ve found during my research.

 

1. We reserve the right to change the terms of the package at any given time. They usually list some time frame for the notice, usually 21 to 30 days.

2. Email accounts have limited storage capacity. Of course, for a small upgrade fee you may increase your email storage.

 

3. Backups are not included, but for a small fee you can add a backup plan.

4. Your bandwidth is part of a shared network. This means they have allocated a pool of bandwidth for the unlimited customers. The downside is that any other customer could effectively cause your site to run slower due to a lack of available bandwidth.

5. Large videos are not allowed. What they consider a large video is anybody’s guess.

6. Support is not included with this plan.

7. You may not install any scripts which may affect the performance of the server.

8. “You can add all the content you wish, but maybe not all at the same time. The vast majority of our customers’ sites grow at rates well within our rules, however, and will not be impacted by this constraint.” What exactly does this mean?

9. You may not use your disk space as an off-site backup source.

10. Database servers have a limit to the number of concurrent connections.

 

Does this mean you should never use these hosting companies? Of course not, but you should definitely approach with caution and keep expectations low. If you have a small site that you know won’t use a lot of disk space or server resources, then a small, affordable “unlimited hosting” package may be perfect for you. Just remember the phrase “caveat emptor,” Latin for “let the buyer beware.”

19 Oct 2010

IPv6 – What It Is and Other Fun Facts

by Administrator

You may have heard that the Internet is quickly running out of IP addresses. If you’re even slightly technical, you have probably also heard that a new IP numbering scheme called IPv6 is on the horizon and it will solve all of our IP address issues…but will it? The short answer is yes, but like everything else in life, it’s a bit more complicated than that.

Let’s start with our current IP addressing scheme, IPv4. IPv4 addresses are expressed in 4 octets, for example 192.168.1.1 (say it like 192 dot 168 dot 1 dot 1), where each number between the dots is an octet. When this addressing scheme was created in 1980, it was the fourth revision in the development of the Internet protocol and the first version to be widely deployed. These 4 octets give IPv4 the ability to address 4,294,967,296 IP addresses. At the time it was deployed, the engineers nodded their collective heads and agreed that 4.2 billion IP addresses would be more than enough and we would never run out. Unfortunately, what they could not foresee was the Internet boom, nor the fact that so many devices besides computers, routers, and servers would one day use IP addresses. With the explosion of smart devices, including TVs, appliances (does my toaster REALLY need an IP address?), and especially cell phones, IP address usage has skyrocketed. In February 2010, the International Telecommunication Union announced that the number of cell phones worldwide had passed 4.6 billion, with “smart phones” (phones that run an operating system and require an IP address) making up a larger and larger chunk of that number. Smart phone usage is growing by leaps and bounds: in the first half of 2010, vendors shipped a total of 119.4 MILLION smart phones, an increase of 55.5% over the first half of 2009.
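A quick way to see where the 4,294,967,296 figure comes from:

```python
# IPv4 addresses are 32 bits: 4 octets of 8 bits each,
# so the total address space is 2**32.
octets = 4
bits = octets * 8
total = 2 ** bits
print(bits)   # 32
print(total)  # 4294967296 -- the ~4.29 billion quoted above
```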

Now we see the issue: IPv4 space is running out, and at an extremely quick rate. Estimates of how long we have before the remaining IPv4 space is exhausted range from 200 to 400 days.

So why don’t we just roll out IPv6 immediately? Unfortunately, it’s not as easy as flipping a switch. In order to fully roll out IPv6, EVERY device in the WORLD that uses an IP address has to support IPv6, or it won’t be able to access the Internet. There are still many routers, PCs, and servers just old enough that they don’t support IPv6. Many cable and other Internet providers (ISPs) are planning major upgrades to make sure they can keep supplying their customers with Internet access. NationalNet has worked closely with our providers and is running both IPv4 and IPv6 with all of them at this time. We are also running IPv6 internally on many devices and stand ready to move everything to IPv6 when the time is right.

So, after all of this, are we sure that we have enough IPv6 IP addresses? We certainly don’t ever want to be in this predicament again. When the engineers rolled out IPv4, they didn’t anticipate the issues we have now with IPv4, so did they plan correctly this time? Just how many addresses are there in the IPv6 implementation? Well, that’s easy – it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 IP addresses.

Okay, I know what you’re thinking…that number is too big for you to even get your head around, so let’s say it in words. It’s three hundred forty undecillion, two hundred eighty-two decillion, three hundred sixty-six nonillion, nine hundred twenty octillion, nine hundred thirty-eight septillion, four hundred sixty-three sextillion, four hundred sixty-three quintillion, three hundred seventy-four quadrillion, six hundred seven trillion, four hundred thirty-one billion, seven hundred sixty-eight million, two hundred eleven thousand, four hundred fifty-six.

Yes, I know – that makes it even harder to understand, so Tomas, our Director of Technical Services, broke it down for me. When we received our IPv6 allocation a few months back I was trying to figure out how many IP addresses we had. I knew we had 79,228,162,514,264,337,593,543,950,336 IPs in our allocation but that’s a hard number to wrap your head around so here’s what Tomas told me.

“If you remember, I said that Internet had about 4 billion IPs, so NationalNet’s IPv6 allocation is equivalent to 18,446,744,073,709,551,616 copies of the current Internet.”

OK, that number was still too big, so Tomas broke it down even further: “If NationalNet decided to give a copy of the current Internet to everyone on the planet out of our IP space, we could give each person 3,074,457,345 internets apiece.” Now that’s a LOT of IP space. Based on our allocation, which is just a small fraction of the overall space, I would say the engineers got it right and we shouldn’t ever run out. Also, if you’re testing IPv6 at home with your current ISP, we do have an IPv6 web site at http://ipv6.nationalnet.com.
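Python’s arbitrary-precision integers make it easy to check this arithmetic. (One observation of my own: the quoted allocation equals exactly 2**96 addresses, which is what you get from a /32 IPv6 prefix, since 128 - 32 = 96 host bits.)

```python
# IPv6 addresses are 128 bits.
ipv6_total = 2 ** 128
print(ipv6_total)  # 340282366920938463463374607431768211456

# The allocation quoted above is exactly 2**96 addresses.
allocation = 2 ** 96
assert allocation == 79228162514264337593543950336

# Copies of the ~4.3 billion address IPv4 Internet in that allocation:
copies = allocation // 2 ** 32
print(copies)  # 18446744073709551616

# Split among roughly 6 billion people (the 2010 world population):
per_person = copies // 6_000_000_000
print(per_person)  # 3074457345
```

Every figure in the post checks out to the last digit.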

07 Sep 2010

Do I Need a CDN?

by Administrator

What exactly is a CDN?

As someone who runs a hosting company, I get this question on a regular basis as we have a couple of different CDN offerings. Many people have heard of a CDN, usually from a friend or an online article, but few people know what it does or how it works, so let’s start with a definition and overview.

CDN is short for Content Delivery Network. CDNs were created to deal with proximity issues when delivering content from a web site. What this means is that the further away the surfer is from your web site, the slower it will load for them. If your web site is hosted in the US, a surfer in the US is going to see much faster load times than a surfer in Australia, simply because they are closer to the web site server.

A CDN consists of storage servers called “caching servers” or “edge servers” that are strategically placed around the world. Web site content is pulled from the “origin server” (the server that hosts your web site) and pushed to the edge servers. When a surfer requests that content, the CDN first determines where the surfer is, then finds the edge server nearest to the surfer, and finally checks whether the requested content is on that edge server. If it is, the CDN delivers the content from the edge server to the surfer, providing faster delivery. If the CDN determines that the content should exist on the edge server but for some reason does not, it pulls the content from the origin server and places it on the edge server. By pointing the URLs in the site’s HTML (the src and href attributes) at the CDN, the webmaster controls which content is served from the CDN and which is delivered directly from the origin server. Now that we know what a CDN is, let’s determine if you need one.
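The hit-or-pull decision can be sketched as a toy cache (paths and content here are made up for illustration):

```python
# Toy model of the edge-server decision: serve from the edge cache
# if present, otherwise pull from the origin first, then serve.
origin = {"/video/big.mp4": b"movie-bytes", "/downloads/app.zip": b"zip-bytes"}
edge_cache = {}

def serve(path):
    """Return (content, source) for `path`, filling the edge cache on a miss."""
    if path in edge_cache:
        return edge_cache[path], "edge-hit"
    # Cache miss: pull from the origin, store on the edge, then serve.
    content = origin[path]
    edge_cache[path] = content
    return content, "origin-pull"

print(serve("/video/big.mp4")[1])  # origin-pull
print(serve("/video/big.mp4")[1])  # edge-hit
```

The first request pays the origin-pull cost; every later request for the same path is served from the edge.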

In the description above, we learn that a CDN has to go through a decision process to determine what to do (where is the surfer, does the content exist, etc). This decision process takes a second or two, which can add a delay to the loading of the content. Due to this decision process, it should be apparent that a CDN is not ideal for all sites or all content. A CDN was designed with larger content (files) where a 1 second delay is not critical. A CDN is also designed for “popular content” i.e., content that is accessed often. Edge servers do not have infinite disk space so all CDNs automatically expire or delete content on the edge servers if the content has not been accessed in some time. This time span is configurable by the site owner but usually it’s 30 days. So, it would not make sense to put up content that is only accessed occasionally.
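The automatic expiry described above can be sketched the same way. This is a toy model: the 30-day window is shrunk to a fraction of a second so the example runs instantly.

```python
import time

# Entries not accessed within the expiry window are evicted.
EXPIRY_SECONDS = 0.01  # stands in for the configurable ~30-day window

cache = {}  # path -> (content, last_access_time)

def touch(path, content=None):
    """Access `path`, storing `content` on first access and refreshing the timestamp."""
    if content is not None or path in cache:
        stored = content if content is not None else cache[path][0]
        cache[path] = (stored, time.monotonic())
        return stored
    return None

def evict_stale():
    """Drop entries that have not been accessed within the expiry window."""
    now = time.monotonic()
    for path in [p for p, (_, t) in cache.items() if now - t > EXPIRY_SECONDS]:
        del cache[path]

touch("/big/movie.mp4", b"bytes")
time.sleep(0.02)          # nobody asks for it within the window
evict_stale()
print("/big/movie.mp4" in cache)  # False
```

This is why rarely accessed content gains nothing from a CDN: it keeps getting evicted and re-pulled from the origin.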

A typical web site is hosted in a single location at a hosting company, and the web site usually consists of web pages of text and images and has been, or SHOULD BE optimized for the best performance. If a web site is properly optimized, it should load pretty quickly from anywhere in the world. Text is very small and most web site images are very small as well. Putting this small content on a CDN will actually defeat the purpose due to the decision process explained above. However, if you have large images, large downloads (zip files, software files, etc) or movie files, a CDN is perfect for this application, provided you have popular content to deliver. We have found that the best use of a CDN is to deliver the web site text and smaller images directly from the origin server and put the larger files on the CDN, thus providing a “best of both worlds” experience for the surfer. Large streaming movies will play much faster on the CDN. Large zip file downloads will download much more quickly on the CDN. The one exception to putting smaller files on the CDN would be a JavaScript file, or some other file that never changes and is always loaded on every page. We have seen some customers use the CDN for JavaScript files and have good success with them.

What a CDN is NOT good for…

CDN edge servers are essentially very large disk arrays, and they are very good at delivering content; however, they are not designed for processing, so you usually cannot put PHP files or any other scripts/programs that require server-side processing on them. You can put JavaScript on them because it is processed on the client side by the browser. CDN providers want their edge servers to “shovel content” to the surfer and nothing more. Adding server-side processing would simply slow down the CDN and create a new level of unneeded complexity.

CDNs are also not cheap. Because of the infrastructure required, as well as the software that runs it, there is a significant investment required to build a CDN. This means that CDN bandwidth can run 2-10 times more than regular bandwidth provided by your hosting company.

Finally, if most of your surfers are in the same area as your web site, i.e., your web site is in the US and most of your surfers are in the US, there is no benefit to having a CDN, as you’ll be paying extra but not really seeing any faster speeds.

So, if you have a web site that delivers larger files, whether streamed or downloaded, you have surfers all over the world and you wish to give them the best possible experience, it may be time to see if YOU need a CDN.

03 Aug 2010

Metered Bandwidth v. Unmetered Bandwidth

by Administrator

In my previous posting, I discussed the difference between throughput (95th percentile) and transfer (per gig). In this article, we’re going to dig a bit deeper and discuss the advantages of metered (un-capped) versus un-metered (capped) bandwidth.

We start with a question from Jonas, a reader, who asks, “With 95 Percentile, can I have my transfer rate limited to some upper limit so that I have a cap on what my bandwidth cost will be each month? I would worry that if I had 2 or 3 days with 10x my normal traffic I would have a heart attack when I get the bill.”

In a word: YES. You can have your bandwidth capped if you like, and if your hosting company will allow it. That said (and I can only speak for NationalNet), we do not recommend that our clients cap their bandwidth, and this is based on sound business practices. Here are some examples:

Example #1: You own a web site that earns you revenue, either by selling your wares or by charging clients a monthly fee to access your content. For whatever reason, your site starts getting an unusually high amount of good traffic; bandwidth goes UP, but so do REVENUES, and thus so do PROFITS! However, if your plan is capped, once you reach your limit, surfers are turned away or greeted with a slow, almost unusable site. No one can make purchases, and paying members cannot access your wonderful content, with the end result being LOSS of revenue and profit.

Example #2: You have a site offering articles or content that other web sites are putting onto their own sites (and possibly paying you for), and your site gets listed on digg.com or discussed on the major news outlets, getting hammered with traffic as bandwidth goes up. If you are capped, the site bottlenecks and surfers have to WAIT for the page to load until the people in line ahead of them are done. Then your clients start complaining, and possibly canceling their service, because their surfers are complaining. Obviously, no one likes losing paying customers.

Of course, the previous two examples make the assumption that the traffic hitting your site is good quality traffic. However, there are times when your bandwidth is being stolen via hotlinking (another web site linking directly to images/content on YOUR server) and this is not desirable. At NationalNet, our monitoring system will alert us to abnormally high bandwidth, our system administrators will investigate and stop the thieves as well as notify you of this high bandwidth. However, you should make it a habit to check your website stats on a daily basis. Not only will this help you understand your traffic and visitors better; you’ll catch any large bandwidth jumps before they can be too costly for you.

So, as you can see, a capped (unmetered) connection is probably not what you want. The key is checking your bandwidth stats on a regular basis and using a host that will watch them for you as well, alerting you if the bandwidth starts to exceed your budgeted amount. Also, make sure you know what bandwidth overages will cost. Many hosting companies advertise their plans on their site, listing a server and some amount of bandwidth for X dollars, but nowhere in the plan details do they list what the overages are; in many cases, those overage charges are considerably more than the regular committed rate. Be sure you know what those charges are.

Now, with all of that said, there are times when an unmetered plan is exactly what you want. If you have a site that you know will never exceed your plan unless something really bad is happening, or a site that is not revenue-driven, or you don’t really care if it’s slow at times, then an unmetered plan may be exactly what you need. Unmetered plans tend to be cheaper as well, because the hosting company knows exactly how much bandwidth it must purchase and does not have to buy extra to cover overages and spikes.

 

Traffic/bandwidth by its very nature is very spiky. On any given day, it goes up and down in fairly wild extremes. For instance, our own bandwidth graphs look like mountains and valleys. Joe Surfer gets out of work, and the bandwidth goes up…and keeps going up until about midnight EST, when it starts going down. Special traffic deals, viral marketing, etc., all contribute to this “spikiness” (did I just make up a word?). Any host worth its salt must make sure that they have plenty of extra bandwidth overhead to cover this spikiness, so that the actions of one or two webmasters do not affect everyone else.

It’s very expensive for a good host to pay for all that “bandwidth overhead”, but in the long run, it’s well worth it.

One final thing to be aware of regarding unmetered/capped plans is that many times these plans are on shared bandwidth. What this means is that the host or provider is actually capped themselves by their upstream providers, or that they have purchased a set amount of bandwidth and continually add customers to it, hoping that their customers never use the full allocation. This is commonly called “overselling”. A good example is a host that has a 1 Gbps connection to their provider but sells 200 10 Mbps plans (the equivalent of 2 Gbps) on that single connection. The risk is that if even half of their customers use their entire allocation, all customers are going to suffer because there isn't enough bandwidth to go around. Overselling is a risk that some hosts take, but one NationalNet never will. It's not worth risking our reputation by having even one day where the network is slow due to overselling.
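The overselling arithmetic in the example above is easy to check for yourself:

```python
def oversubscription_ratio(uplink_mbps, plan_mbps, plan_count):
    """Ratio of bandwidth sold to bandwidth actually available upstream."""
    return (plan_mbps * plan_count) / uplink_mbps

# The example from the text: a 1 Gbps uplink sold as 200 x 10 Mbps plans
ratio = oversubscription_ratio(1000, 10, 200)
print(ratio)  # 2.0 -- twice as much bandwidth sold as actually exists
```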

23 Jul 2010

What is 95th Percentile?

by Administrator

What is this 95th Percentile (or, the difference between throughput and transfer)?

Many, if not most, hosting companies sell and bill bandwidth based on a method called the 95th percentile. Many, if not most, customers don't have a clue what the 95th percentile really is. In this article, I'll try to shed some light on it.

In order to explain, we must first understand the difference between the two types of bandwidth billing methods: TRANSFER (95th percentile billing) and THROUGHPUT (per-gig billing). Let's look at them individually.

Throughput is the actual total SIZE of the combined files sent by the server. Throughput is sold in Gigabytes (GB) and is an aggregate monthly total. For example, say you have a web page called THISPAGE.HTML that is 25 kB, and on this page you have 3 graphic images of 25 kB each, for a total of 100 kB. If 100,000 people downloaded that page over the course of a month, your throughput would be calculated as 100 kB × 100,000 = 10,000,000 kB, or 10 GB. So for that month your THROUGHPUT would be 10 GB. It does not matter whether all 100,000 people hit the server at the same time or were evenly spread out over the month; it is still 10 GB of THROUGHPUT.
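The arithmetic above can be expressed directly (using decimal units, 1 GB = 1,000,000 kB, as in the article):

```python
def monthly_throughput_gb(page_kb, images_kb, visits):
    """Total data served in a month, in GB: per-visit page weight times visits."""
    per_visit_kb = page_kb + sum(images_kb)
    return per_visit_kb * visits / 1_000_000

# The article's example: a 25 kB page with three 25 kB images, downloaded 100,000 times
print(monthly_throughput_gb(25, [25, 25, 25], 100_000))  # 10.0 GB
```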

Now, on to TRANSFER. But before we begin, let me state that under *NO* circumstances can you mix Throughput and Transfer. It simply can't be done (it's like trying to add up gallons and nickels); they are two different things.

TRANSFER is measured in Megabits per second (Mbps) and measures how much information is traveling through the Internet “pipe” at any given time. I like to compare TRANSFER to water in a series of water pipes. Imagine that your home PC has a water hose connected to it instead of an Internet connection. The hose is 1/2″ and connects to the side of your house, where it meets a 2″ pipe, and your house is connected to the water main, a 12″ pipe. In this example, your 1/2″ water hose is your home Internet connection, the 2″ pipe to your house is your ISP and the 12″ water main is the backbone of the Internet. No matter how hard you try, you are only going to get 1/2″ of water into your PC at any given time, because the “pipe” is only a 1/2″ water hose.

Now, if I were going to sell you water BY THE GALLON, that would be called Throughput (see above). Or I could sell you a PIPE and just charge you for the amount of water that you push through it at any given time; this is called TRANSFER. For example, say I take a measurement right now and you are pushing 1″ of water through the pipe; I look again in five minutes and you are still pushing 1″; in five more minutes you are pushing 1/2″; and in five more minutes you are pushing 2″. How big a pipe do you need to accommodate your traffic flow without any water backing up? You would need a 2″ pipe. But you are not using 2″ all the time, so why should you have to pay for a 2″ pipe all the time? This is where the 95th percentile comes in.

The 95th percentile (an industry standard) simply means that the hosting company will look at your pipe every five minutes, take a reading and add that reading to a list that they keep for 30 days. At the end of the month that list will contain 8640 readings (there are 12 five-minute intervals in an hour, 24 hours a day, for 30 days). They then sort that list from the biggest number to the smallest, so that your largest five-minute reading is at the top, the second largest is next, and so on. The top 432 entries (the top 5%) are discarded, and the 433rd is considered your “95th Percentile”; that is the number you pay for. The 95th percentile was designed to chop off wild peaks and bill you only for what you are sustaining on a regular basis. This is a rolling 30-day number that is constantly changing: once you have 8640 data points, every time a new data point is added and the list is re-sorted, the oldest data point drops off.
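The procedure described above (sort high to low, discard the top 5%, bill the next reading) is only a few lines of code. This is a sketch of the method as described, not any particular host's billing implementation:

```python
def ninety_fifth_percentile(samples_mbps):
    """Sort the five-minute readings high to low, discard the top 5%,
    and return the next one. With a full month of 8640 samples, that
    discards 432 readings and bills the 433rd."""
    ordered = sorted(samples_mbps, reverse=True)
    discard = int(len(ordered) * 0.05)
    return ordered[discard]

# A month that idles at 10 Mbps but spikes to 100 Mbps for under 5% of the time
samples = [100] * 400 + [10] * 8240  # 8640 five-minute readings
print(ninety_fifth_percentile(samples))  # 10 -- the wild peaks are chopped off
```

Because the spikes occupy fewer than 432 of the 8640 readings, they fall entirely within the discarded 5% and cost nothing.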

As for which is more advantageous, it depends on the traffic patterns of your site. TRANSFER (95th percentile) is good for almost all sites, with very few exceptions. THROUGHPUT (per-gig billing) is recommended for sites that have extremely high spikes or very inconsistent traffic. For example, if you have very high traffic every Monday but the rest of the week is very low, then being billed on THROUGHPUT may be the best for you: those Mondays would produce lots of big five-minute readings, far more than the 5% that gets discarded, which would create an inflated 95th percentile. However, very few sites have this type of traffic pattern.

With TRANSFER billing, your host should provide 95th percentile graphs (usually MRTG graphs, the industry standard) so you can see your transfer yourself. You should check these graphs every day, as they can indicate problems as well as show you your traffic patterns. You should see highs and lows each day, and these patterns of highs and lows should follow the sun. If you see a flat line across the top of the graphs, then your hosting company doesn't have enough bandwidth to handle your needs (and this is much more common than one would think). ***IF YOU ARE BEING BILLED ON 95TH PERCENTILE, MAKE SURE YOUR HOSTING COMPANY PROVIDES YOU WITH THOSE GRAPHS*** If they refuse, they obviously have something to hide.
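Spotting a flat-topped graph can also be automated. This is a rough heuristic with made-up thresholds, not an industry-standard check:

```python
def looks_saturated(samples_mbps, tolerance=0.02, fraction=0.25):
    """Heuristic: if a large fraction of readings sit at (or within 2% of)
    the peak, the graph is 'flat-topped' and the pipe is likely maxed out.
    Both thresholds are illustrative, not a standard."""
    peak = max(samples_mbps)
    near_peak = sum(1 for s in samples_mbps if s >= peak * (1 - tolerance))
    return near_peak / len(samples_mbps) >= fraction

print(looks_saturated([95, 95, 94, 95, 95, 40, 30]))  # True: pinned at ~95 Mbps
print(looks_saturated([10, 40, 80, 95, 60, 20, 12]))  # False: normal peaks and valleys
```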

Hopefully this helps you understand what 95th percentile is.

12 Jul 2010

RAID Simplified

by Administrator

If you’re a webmaster or someone that has ever dealt with a server, you have probably heard the term RAID. RAID, which stands for Redundant Array of Inexpensive Disks, or occasionally Redundant Array of Independent Disks, is a way to put 2 or more hard drives together in different configurations to meet certain criteria, for better redundancy, faster speeds or both. While there are many sites on the internet that explain RAID already, many of them are quite technical in nature so this explanation will simplify it by describing each RAID type, what is required and the pros and cons of each. There are actually 13 different RAID types but only 4 that are commonly used. I will cover these 4 in detail.

RAID 0: RAID 0 is sometimes called striping. RAID 0 requires at least 2 drives. Data is split into blocks and written across all the drives in turn, which means the pieces of a file are spread over every drive. Because of this, the file can be read back much faster, as the reads come from all the drives simultaneously. RAID 0 works well for a server where increased disk space is desired but redundancy is not an issue. RAID 0 may be used for file servers where a backup file server is also in place in case of data loss.

Pros:

  • Easy to create
  • Fast reads and writes
  • Can be done with only 2 drives
  • Disk capacity is the combined size of the drives (i.e., two 200 GB drives give you 400 GB of capacity)

Cons:

  • No redundancy. If any drive in the RAID set fails, you lose all the data in the array
  • Not a true “RAID” due to the lack of redundancy (remember, RAID stands for “Redundant Array of Inexpensive Disks”)

 

RAID 1: RAID 1 is mirroring and requires a minimum of 2 drives; the drives must be installed in pairs (2, 4, 6, etc). Each pair of drives is a mirror, with all data on one drive identical to that on its partner. RAID 1 is perfect for a web server, where 95% of the disk access is reads from the drive to deliver web content and the other 5% is FTP uploads, where speed isn't really an issue. By default, every managed server that NationalNet builds comes with RAID 1 (for speed and redundancy) unless otherwise specified by the customer, or unless the server is a database or some other type of specialized server that requires a different RAID type.

Pros:

  • Twice the read speed of a single drive
  • Perfect for a web server where most of the activity is reads from the disk
  • True redundancy in that if a drive fails, you just replace it and the RAID automatically rebuilds

Cons:

  • Writes slightly slower than a single drive (every write must go to both disks)
  • The capacity is that of a single disk (i.e., two 200 GB drives in a RAID 1 give you 200 GB of capacity)

 

RAID 5: RAID 5 requires at least 3 drives. The data is written across all the drives, with sections of each drive dedicated to parity bits. Without getting too technical, the best way to explain parity bits is that they hold the extra information needed to reconstruct the contents of any single failed drive from the drives that remain. Because of the way the data and parity bits are distributed across all the drives, the array can survive the failure of any one drive. The capacity of a RAID 5 is N-1 (i.e., you lose one drive's worth of space to parity), which means that if you have 4 500 GB drives, your capacity would be 1500 GB. RAID 5 works well where more disk space is required than a single drive can provide.

Pros:

  • Very fast reads (data is read from multiple drives simultaneously)
  • Good disk speed
  • Good redundancy

Cons:

  • Disk failure can impact performance
  • Slower write speeds
  • Expensive to implement. Requires at least 3 matching drives and an expensive RAID card

 

RAID 10: RAID 10 is two or more mirrored pairs (see RAID 1) striped together (see RAID 0). It requires a minimum of 4 drives to implement and, like both RAID 0 and RAID 1, must be built in pairs. RAID 10 is very fast for both reads and writes and works well for servers that require high availability as well as fast read and write disk speeds. A database server would be a good example of where you would implement RAID 10.

Pros:

  • Very high disk speeds for both read and write access
  • Given a 4 disk RAID 10, you could lose two drives and not lose any data provided it was one drive from each RAID 1 set in the RAID 10. Given this same 4 disk RAID 10, the failure of one drive would never affect the data

Cons:

  • Expensive to implement. Requires 4 drives and an expensive RAID card
  • Limited scalability
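Summarizing the four common levels, the usable-capacity rules can be sketched in code (a simplification that assumes all drives in the array are identical):

```python
def usable_capacity(level, drive_gb, count):
    """Usable space for the four common RAID levels, drives assumed identical."""
    if level == 0:
        return drive_gb * count        # striping: all space is usable
    if level == 1:
        return drive_gb * count // 2   # mirroring: half the space
    if level == 5:
        return drive_gb * (count - 1)  # one drive's worth lost to parity (N-1)
    if level == 10:
        return drive_gb * count // 2   # mirrored pairs, then striped
    raise ValueError("unsupported RAID level")

print(usable_capacity(5, 500, 4))   # 1500 GB -- the RAID 5 example above
print(usable_capacity(10, 200, 4))  # 400 GB from four 200 GB drives
```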

 

These are the 4 most often used RAID types. Here is a condensed list of the other, lesser used, RAID types. These are rarely used because the disadvantages outweigh the advantages, because of cost constraints, or both.

RAID 0+1: Similar to RAID 10 in that it's a mirror/striping combination, but built the other way around: two RAID 0 stripe sets mirrored together. A single drive failure takes an entire stripe set offline, leaving the array with no redundancy until it is rebuilt.

RAID 2: Requires expensive specialized disks and uses ECC (error-correcting code). Rarely, if ever, used.

RAID 3: Uses a parallel disk-writing method. Requires a minimum of 3 drives, with 1 drive dedicated to parity (see RAID 5). Very slow after a disk failure and does not use disk space very efficiently (lots of wasted disk space).

RAID 4: Independent data disks with one disk dedicated to parity bit. Requires minimum of 3 drives. Very slow disk writes. Difficult to rebuild after a failure.

RAID 6: Very similar to RAID 5, only with a second set of parity bits written, which gives it higher fault tolerance for mission-critical situations. Very complex to implement, with very poor write performance. Requires a minimum of 4 drives due to the extra parity.

RAID 7: Unlike the other RAID levels, RAID 7 isn't an open industry standard; it is really a trademarked marketing term of Storage Computer Corporation, used to describe their proprietary RAID design. RAID 7 is based on concepts used in RAID levels 3 and 4, but greatly enhanced to address some of the limitations of those levels.

RAID 1E: Simply put, RAID 1E is a variation of RAID 10, only with more implementation headaches and less redundancy.

RAID 50: Without getting too technical, a RAID 50 is similar to putting a RAID 5 and a RAID 0 together. Better redundancy, but a high level of complication to implement and maintain.

RAID 53: Very similar to a RAID 5 and RAID 3 put together.

Hopefully, you found this information helpful and maybe, just maybe, when you're selecting your web hosting company and they ask you if you need RAID, you'll now be able to hold your own in that part of the conversation.

25 Jun 2010

10 Tips to Selecting Your Webhosting Company

by Administrator

You have your business plan. You're working on your web site. You have all the pieces, and it's now time to select a web host. You go to Google.com, and after a bit of searching you find a myriad of hosts all promising the same thing. How in the world can you make sense of all of the choices and select a host dependable enough to keep you running without killing your business before it even gets started? To help you in your quest for a dependable host, here are the 10 best tips we can give you.

1. Determine Your Needs. Does your web site have particular software requirements that only certain servers or hosting companies can provide? For instance, if you’re doing live streaming then you need to make sure that the hosting company can support live streaming. Your business model may be such that you require 100 email addresses and auto responders, but the host you’re looking at may only allow 50 emails. You may require 10 GB of disk space but the plan that you’ve selected only allows for 5 GB. Itemize your needs on paper before starting your research into web hosting companies and narrow your search based on your particular needs.

2. Shared, Managed Dedicated or Unmanaged Dedicated. Now that you have determined your requirements, you need to decide whether you need a shared account, a managed dedicated server or an unmanaged dedicated server. Start by going back to your list from tip 1; the amount of traffic you expect as well as the disk space required will help you decide. In the beginning, a shared account may be perfect for your needs and can be obtained for as little as $2-3 per month. However, a shared account may come with limitations – you may not be allowed to have databases or certain software programs. Also, when you share a server with other customers, there is always the possibility that other customers' web sites could affect the performance or security of your site. Should you decide on shared hosting, make sure the company also offers dedicated servers. Planning for growth can determine which hosting company to choose, so that you're adequately prepared to move into a dedicated hosting plan when the time is right and still remain with the same hosting provider.

If you require strong security or want the peace of mind of knowing that other customers cannot affect you, then a dedicated server is the way to go. Of course, a dedicated server comes at a much higher price. Bargain dedicated servers can be found for as little as $49/month, but beware, because the old adage that "you get what you pay for" is certainly true here.

If you decide on a dedicated server, then you have another decision to make – managed or unmanaged. With an unmanaged server you are given full control of the server and will be required to set up any software needed for your site as well as ensure the server is secure. You may decide that you need a control panel to make server management easier, or you may determine that your skill set is such that you can do all of your work from the command line.

If you decide that a fully-managed server is the way to go, you have an entirely different set of concerns. Many hosts tout their servers as "fully-managed" yet give you a control panel so that you can do most functions yourself. Their idea of fully-managed is to handle the hardware and operating system updates and maybe provide cursory assistance with minor problems as they arise. Other hosts, NationalNet included, take care of everything except your web site and become virtual employees of yours, but of course this comes at a higher price. To avoid surprises later, be sure to ask questions about what level of support comes with a fully-managed server.

3. Windows or Linux? The large majority of web hosting plans are built around some version of Linux (CentOS, Redhat, Debian, etc). This is usually not a concern but you should make sure that the hosting plan you have selected comes with an operating system that will support your web site. If you have created your web site in .asp or .NET then you’re going to need a Windows server. Make sure that your “needs” list from tip number 1 includes the operating system and that you select the correct hosting package based on that.

4. Investigate Their Support. No matter what type of plan you select, be it shared or dedicated, managed or unmanaged, at some point you're going to require support. Support is an area you never think about until you need it, and when you do need it, you want it to be fast and effective. As mentioned above, take the time in the pre-sales process to ask questions. What is included with the support? What is the average turnaround time for resolution of issues? What is the process for obtaining support? How many tech support people do they employ? If a host says they have 24/7 support, call them at 2 AM one morning and confirm. Many hosts that claim 24/7 support in fact use an answering service that cannot do anything for you other than take down your information and then call and wake someone up, thus delaying your support request.

5. Beware of Hosting Review Sites. There are literally dozens of hosting review sites on the internet and, unfortunately, it's very difficult to take any of them seriously, mainly because they are all supported by advertising from hosting companies. Just because a hosting company is listed on a review site and has good reviews there does not guarantee that the host is any good. Of course, there are exceptions to every rule and there may be some legitimate review sites, but by and large you should take them with a grain of salt.

6. Location, Location, Location. There is another type of web hosting besides shared, unmanaged and managed – colocation. Colocation is where you own your equipment and rent space from a hosting company. The company provides you with a place to rack your equipment as well as power for it. Optionally, they may also supply you with bandwidth, or you may purchase it separately if the data center is carrier neutral. If you're doing colocation, the location of the data center may be important to you, especially if you wish to work on your equipment personally. Pick a data center too far away and you'll regret it when you have to make that 2 AM drive to resolve an issue. Most colocation facilities offer "remote hands," where you instruct them on what you wish to have done. Should you require remote hands – and it's almost guaranteed that you will at some point – make sure that the techs are capable. Some data centers use security guards or other non-tech-savvy personnel for remote hands, which will certainly delay the resolution of any issues you have. If the remote hands are provided by well-qualified technicians, then proximity may not be an issue for you.

7. Investigate the Performance and Reliability of Their Network. Web site owners at the very least want two things – a site that is fast and a site that is always up. When selecting a host, take the time to look at their network. Find out if they have redundancy in their network equipment; if they have only one router and it fails, your site is going to be down until they get it repaired. A good host will have multiple upstream bandwidth providers to protect against the failure of a provider, as well as the ability to use multiple providers to route around Internet trouble spots. Ask the hosting company to provide you with a test file and a few sites that they host so you can check the speed of their network. Be sure to check all the sites they give you, because there is always the possibility that a single site is slow through no fault of the host. Don't hesitate to ask questions about their network, including the number of providers, the type of network equipment and its configuration. A properly set up host will be proud to discuss this with you.
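One way to check that test file yourself is to time the download and convert it to Mbps. The test-file URL below is a placeholder; use whatever file the host provides:

```python
import time
import urllib.request

def mbps(num_bytes, seconds):
    """Convert a transfer of num_bytes over seconds into megabits per second."""
    return num_bytes * 8 / seconds / 1_000_000

def measure_download(url):
    """Time a full download of the host's test file and return the speed in Mbps."""
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=30).read()
    return mbps(len(data), time.monotonic() - start)

# The URL is made up for illustration; substitute the host's real test file.
# print(round(measure_download("http://speedtest.host.example/100MB.bin"), 1), "Mbps")
```

Run it at several times of day, since a network that is fast at 4 AM may crawl at peak hours.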

8. Ask About Their Other Offerings. While most hosts will provide basic hosting services as well as email accounts and maybe backups, there are other services and offerings that may help you make your decision. While you may not require these services in the beginning, you might find yourself wanting them in the future. Some services to consider are backups, business class email, web statistics, support for mobile devices, shopping cart software, content delivery networks (CDN) and cloud storage.

9. Phone A Friend. You wouldn't hesitate to ask a friend about a restaurant or where you can find a good mechanic, so use that same approach here. However, while everyone eats and most people have a car, it may be a bit more difficult to find a friend who actually hosts a web site. The good news is that you're in luck – there are many forums on the Internet, so with a bit of searching you should be able to find one that is frequented by webmasters. Ask around, but be aware that many posters are also paid by hosting companies to shill for them. There are also many industry-specific forums, so it's quite likely that you can find a forum full of webmasters running sites similar to yours. Not only will you find good hosts, but you'll also learn who your competition is.

10. Don’t Make Assumptions/Buyer Beware It’s been said a couple of times before but it bears repeating; ASK QUESTIONS. Don’t assume that because all the other hosts you are looking at provide backups that the one you’re thinking of selecting does. Beware of claims like unlimited bandwidth or unlimited disk space as these always come with some sort of disclaimer, so be sure to read the fine print. Ask them about their Service Level Agreement (SLA) as well as their terms of service. Ask to see a copy of the contract or Master Service Agreement (MSA) as well as any other document you may be asked to sign. Don’t be afraid to have a lawyer read all of the documentation to prevent any “gotchas” later.

 

Hopefully this list, while not all-inclusive, has enough information to help you make the right decisions when selecting a web host for your brand new web site.

NationalNet, Inc., Internet - Web Hosting, Marietta, GA