
Monthly Archives: January 2014

30 Jan 2014

Microsoft Makes A Bold Move With In House Servers And The Open Compute Project

by Bill

Microsoft manages quite a bit of data for its Bing, Windows Azure and Office 365 services, and has announced that it has quietly been using its own server designs, bypassing the products of significant strategic partners like Hewlett-Packard (HP) and Dell. That Microsoft would not want to publicize bypassing its traditional allies is completely understandable, which makes its announcement at the Open Compute Summit in San Jose, California even more surprising. Not only did Microsoft share its previously secret in-house server designs, it also announced that it will “open source” those designs and the accompanying software, sharing them so that other online entities can use them inside their own data centers as well.

Launched by Facebook in 2011, the Open Compute Project was the result of Facebook’s rapid expansion and the high cost of using off-the-shelf servers to meet their immense data-handling needs. While other large, established players in the market like Apple, Google and Amazon had built data centers around the globe using their own lower cost designs, they each took a proprietary approach as a way to protect a competitive edge. Facebook chose the open source route, and in so doing was able to, as Mark Zuckerberg put it, “blow past what anyone else has done.”

According to Zuckerberg, the utilization of Open Compute Project equipment instead of proprietary products from established server manufacturers has saved the social networking giant $1.2 billion, and with the higher energy efficiency of the open source hardware, Facebook was able to conserve the equivalent of the annual energy usage of 40,000 homes.

Microsoft has been careful to portray Dell and HP in a positive light, and while Google and Facebook have their own equipment made by low-cost Asian manufacturers, Microsoft has thus far refused to reveal who has been building its machines. Further, it was announced that Dell and HP would be selling systems based on this open source design, maintaining the ongoing, mutually beneficial relationship these tech giants have long enjoyed together.

After years of trying to maintain control over the whole world of computing and fighting the notion of open source wherever it could, to the point of being saddled with monumental anti-trust litigation, Microsoft seems to be changing its ways dramatically as Steve Ballmer exits and new decision makers come into place. By sharing its designs and software, Microsoft may push the web forward, helping others build more efficient data centers while lowering the cost of its custom-built gear by making it more ubiquitous.

The open source movement can also benefit Microsoft by helping it sell more software: the software that underpins Microsoft’s cloud services like Azure is designed to run on these servers, so why wouldn’t developers use Microsoft’s software when implementing their own services on the same hardware?

In addition to Microsoft joining the alliance, a similar announcement was made by IBM, bringing corporate membership in the organization to 150, including tech heavyweights like Advanced Micro Devices and Seagate Technology. NationalNet will continue to utilize the best and most efficient servers available at the leading edge of the market to provide our clients with the fastest, most efficient and affordable throughput. Whether that evolves into open source servers or not, only time will tell.

27 Jan 2014

FCC Net Neutrality Rules Derailed By Federal Court of Appeals

by Bill

Since the earliest days of internet connectivity, ISPs and broadband providers have delivered transfer speeds in a completely neutral manner. Whether a site is owned by a client or a competitor, the FCC’s ‘Net Neutrality’ rules have preserved this overarching method of content distribution, even in the face of very powerful market forces. Carriers have long chafed at the idea that they could not charge premium fees for faster service, or throttle distribution of services that they argue profit from the carriers’ inability to price their bandwidth pipes effectively.

One notable example is Netflix, a site that now accounts for a significant percentage of all internet traffic, which has benefited tremendously from the fact that it can provide clear video streaming services to customers without paying any kind of additional fee for the massive amount of infrastructure it utilizes each day. Other services owned by Google like YouTube, Gmail and Hangouts also rely heavily on their ability to serve content instantaneously to clients across point-to-point infrastructure that rarely belongs to them. Shouldn’t Comcast, Verizon and others be able to monetize their products to the full extent the market will bear, without government intervention preventing their full profitability? So the carriers argued, and the court agreed.

In a decision this month, the United States Court of Appeals for the District of Columbia Circuit ended net neutrality, deciding case No. 11-1355 in favor of Verizon and against the Federal Communications Commission (FCC) after Verizon challenged the FCC’s Open Internet rules. Some may argue that the case is headed for the Supreme Court in some form, or that other methods exist for the FCC to create the same kind of regulations in a more legally defensible way, but rumors persist that the FCC does not intend to do so, and that the Supreme Court is likely to side with the carriers as well.

Some pundits are already panicking and calling the ruling ‘the end of the internet as we know it’, while others are taking a more serious, business-minded approach. One thing to keep in mind is that the carriers do not derive revenue by putting site owners out of business, and the FTC is unlikely to allow predatory pricing practices to get out of hand. That makes the notion that carriers are about to choke off your ability to distribute content extremely unlikely. What may be much more likely is the birth of a tiered traffic system that follows the freemium pricing model now used in many other sectors of the market.

Imagine paying Netflix $9.00 per month for basic service and watching standard definition video, or having access to three videos per day, with a $15.00 per month premium package allowing you to watch as much as you like, and see it all in HD. You may not need to imagine it for long, because it is one of the most likely real-world outcomes to be spurred by this ruling.

From Netflix’s perspective, it will be a major marketing obstacle. The premium pricing will likely be seen by many as a money grab (the same sort of viewpoint that almost sank the brand entirely the last time Netflix tried to change pricing models). In reality, however, most if not all of that premium fee will be passed along to the carriers to cover the increased costs that will now be allowed without net neutrality in place.

From the carrier perspective, it represents an enormous amount of additional revenue with little or no additional expense. Their spin agents will be hard at work trying to convince the world that this is not a case of getting extra money now, but of finally collecting their fair due after two decades of going without. A very hard pitch to make successfully, whether you find truth in it or not.

13 Jan 2014

Intel Announces New Edison 22nm Dual-Core PC The Size Of An SD Card

by Bill

The biggest announcement to come out of CES 2014 may be one of the smallest items exhibited this year. Intel CEO Brian Krzanich introduced Edison, a fully functional miniature computer that runs Linux, with built-in WiFi and Bluetooth modules, yet occupies the same amount of space as a standard SD card.

Edison is clearly aimed at powering the next generation of wearable devices, and its miniaturized form factor makes it a candidate for virtually any device developers want to add automation or connectivity to in the future. Hardware manufacturers often find developers slow to adopt new platforms, so Intel has taken some extraordinary steps to encourage robust interest from third parties capable of making Edison a massive new technology revenue stream for Intel.

Nursery 2.0 is the name Intel used to describe a small collection of apps developed in-house to show the kinds of things Edison can do. They included a toy frog that reports vital signs of an infant in a home nursery to a wirelessly connected LED coffee cup, and a milk warmer that immediately begins making a fresh baby bottle the moment the toy frog hears the baby cry.

That may all sound nice, but what about cold hard cash? Intel answered that question by announcing a Make It Wearable contest with $1,300,000 in prizes for developers to win. First prize will be $500,000 cash, and the contest is expected to begin by June of this year.

Beyond wearables, the implication of a self-contained computer this size is a future where computing power becomes even more modular, allowing consumers to extend the functionality of almost any powered device with a simple cartridge-like installation process.

NationalNet continues to monitor hardware development by mainline companies and independent start-ups to provide our customers with the best service, most reliable support and affordable pricing through innovation. Edison may not find its way into the servers of our colocated data center in a matter of weeks, but the technology will undoubtedly affect the way digital business is done in the future.

06 Jan 2014

How Far Have Hard Drives Come In 50 Years? A Look Back At RAMAC

by Bill

You may not even be aware of it most of the time unless you pause to think about it, but almost every device you handle these days has some kind of hard drive in it, or is connected to a system of hard drives via a computing cloud. Everything from telephones and tablets to coffee makers and computers makes use of ubiquitous hard drive technology that continues to evolve at a breathtaking pace. Technologists are fond of looking forward and attempting to predict the future, but as a year comes to a close we chose to take a look back instead at just how far hard drives have come in the last fifty years.

Originally developed by IBM during five years of research and development, and finally made public in 1956, the first hard drive was bigger than a refrigerator and weighed more than a ton. It was known as the IBM 305 “RAMAC,” shorthand for “Random Access Method of Accounting and Control.” The promotional video shown below gives a great idea of the size and technological innovation that went into creating the first device of its kind.


To this day, the original RAMAC remains on display at the Computer History Museum in Mountain View, California.

Big Blue built the system “to keep business accounts up to date and make them available, not monthly or even daily, but immediately.” It was meant to rescue companies from a veritable blizzard of paper records, so adorably demonstrated in the film by a toy penguin trapped in a faux snow storm.

Before RAMAC, as the film explains, most businesses kept track of inventory, payroll, budgets, and other bits of business info on good old-fashioned paper stuffed into filing cabinets. Or, if they were lucky, they had a massive computer that could store data on spools of magnetic tape. But tape wasn’t the easiest medium to deal with: you couldn’t get to one piece of data on a tape without spooling past all the data that came before it.
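The difference between tape and disk access is easy to see in a few lines of code. This is a loose modern analogy, not anything period-accurate, and the record names and values are purely illustrative:

```python
# Tape = sequential access: you must spool past every earlier record.
# Disk = random access: any record is reachable directly.

tape = [("payroll", 1), ("inventory", 2), ("budget", 3)]  # records in tape order

def read_from_tape(key):
    """Return (value, records_spooled) for the requested record."""
    steps = 0
    for k, v in tape:
        steps += 1          # every record before the target must pass the head
        if k == key:
            return v, steps
    raise KeyError(key)

disk = dict(tape)           # the random-access equivalent: direct lookup

value, records_spooled = read_from_tape("budget")
print(records_spooled)      # 3 records read just to reach the last one
print(disk["budget"])       # the same record, fetched in a single lookup
```

The cost of the tape read grows with the record’s position; the dict lookup does not, which is exactly the advantage the film is selling.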

Then RAMAC gave the world what’s called magnetic disk storage, which let you retrieve any piece of data without delay. The system’s hard drive comprised fifty vertically stacked disks covered in magnetic paint, and as they spun at 1,200 RPM, a mechanical arm could store and retrieve data from the disks. The device stored data by changing the magnetic orientation of a particular spot on a disk, and could then retrieve it at any time by reading that orientation back at a rate of 100,000 bits per second. The entire one-ton device held a grand total of 5 MB of data. That is barely enough to record a single music file today, and nowhere near the storage necessary to handle video or other modern uses of digital storage, but in 1956 it represented a virtually unlimited amount of storage for accounting departments that wanted to access plain numerical values quickly, in the days before spreadsheets were even a distant imaginary advancement.
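As a back-of-envelope check on those figures (assuming the decimal convention of 5 MB = 5,000,000 bytes), the quoted specs imply it would take several minutes just to read the entire drive end to end:

```python
# Back-of-envelope arithmetic on the RAMAC specs quoted above.
# Assumes 5 MB = 5,000,000 bytes (decimal megabytes).

capacity_bits = 5_000_000 * 8   # total capacity expressed in bits
transfer_rate = 100_000         # read rate, bits per second

full_read_seconds = capacity_bits / transfer_rate
print(f"Reading the entire drive: {full_read_seconds:.0f} s "
      f"(about {full_read_seconds / 60:.1f} minutes)")

# Rotational latency at 1,200 RPM: half a revolution on average.
avg_latency_ms = (60 / 1_200) / 2 * 1_000
print(f"Average rotational latency: {avg_latency_ms:.0f} ms")
```

That works out to roughly 400 seconds for a full read, which sounds glacial today but was transformative next to hunting through filing cabinets or spooling tape.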

RAMAC was the genesis of technology that led to almost every hard drive available today, along with software applications like the relational database, which made information storage and retrieval the killer app of computing in the 1980s.

When the RAMAC was installed at the Mountain View museum, remnant data was discovered on the drive from a Canadian insurance company, along with some statistical information from the 1963 Major League Baseball World Series. According to researchers, “The RAMAC data is thermodynamically stable for longer than the expected lifetime of the universe.” And while 5 MB of storage in a one-ton machine may now seem like a waste of space, if it weren’t for RAMAC and the advancements it spawned, you’d still be reading this article on a piece of paper while trying not to smudge the ink on the page.

NationalNet, Inc., Internet - Web Hosting, Marietta, GA