We're excited to announce that we're entering Phase II of our network buildout. We're in the final stages of rolling out some pretty significant changes, including more geographic diversity, upgrades to our overall network infrastructure, and changes to how we complete calls.

As most of you know, we've been working with The Planet in Dallas due to our existing relationship with them through HostGator. While we've been happy there, we've been evaluating other solutions for a while now, since our needs are a little more advanced than traditional hosting. After weighing a lot of options, from colo with fully private connections to various datacenters to simply expanding within The Planet, we've signed an agreement with Dallas-based Softlayer for this phase for a number of reasons.

Dallas will remain our core node, but we're expanding to Washington DC & Seattle!

We still feel Dallas is the best overall "hub" in terms of being centrally located, and users have had no significant problems with it. That being said, we wanted to expand outwards a little and bring in East and West Coast nodes to reduce latency, improve routing, and further enhance call quality, while also moving to what we consider higher-tier facilities.

The challenge was that we didn't want to just scatter a bunch of servers throughout the country and try to keep them in sync; things get very complicated doing that at any kind of volume.

Softlayer was founded by the original founders of The Planet, so there's an existing relationship we're very comfortable with. They've built out three high-end datacenters in major carrier hotels/facilities: Infomart in Dallas, 365 Main in the Washington DC area, and Sabey in Seattle. These facilities are all top notch, and in addition to Softlayer's robust network, we have access to basically any carrier we want to deal with on our own, since nearly every carrier has a presence in these facilities. Some of the other customers found in these facilities include CNet, Ticketmaster, Sun Microsystems, etc.

Why Softlayer

There are a lot of reasons we chose Softlayer, even though we were originally leaning towards going full colo and bringing in connections exclusively on our own. Here are a few:

1) Their public network is VERY built out, like The Planet's. This gives us very short routes to nearly all ISPs.

2) They run a private 10Gbps network connecting all 3 datacenters.

3) We're in the planning stages of becoming a CLEC in FL and a switchless CLEC in most other states, and we expect to be terminating most of our traffic directly to TDM by the end of the year. We needed better/easier access to other carriers, and being in some of the top carrier hubs helps significantly.

4) Just like HostGator, we're not in the hardware business and don't want to be. For new servers, Softlayer guarantees deployment within 2 hours. We can even provide our server images in advance and use their API for adding new ones (see the sketch below). This is huge for handling spikes in growth. It was debated quite a bit, but ultimately we want to focus on what we do best and let someone else deal with the hardware side of things, while avoiding the capital expenditures of a high-end buildout like Softlayer's.
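
For those curious what API-driven provisioning looks like, here's a rough sketch using Softlayer's Python client. This isn't our actual tooling; the account details, hostname, image ID, and datacenter code are all made-up placeholders:

    # Rough illustration of spinning up a node via SoftLayer's Python
    # client (pip install softlayer). All values below are placeholders.
    import SoftLayer

    client = SoftLayer.create_client_from_env(
        username='our-account', api_key='our-api-key')
    vs = SoftLayer.VSManager(client)

    # Provision a new node from a server image we uploaded in advance.
    guest = vs.create_instance(
        hostname='node-07',      # hypothetical name
        domain='example.com',
        cpus=4,
        memory=4096,             # MB
        datacenter='dal05',      # Dallas
        image_id=12345,          # ID of our pre-built image
    )
    print(guest['id'])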

Migration

In general, most of you won't notice any changes for a while. We don't plan to structure this the way VT did with switching around, etc. It'll all be managed on the backend. I know some of you may prefer another node, but ultimately we'll be managing the network in the way that's best for everything as a whole. Letting people switch around can cause a lot of issues. We will, however, be able to set preferred nodes for individual users if they have problems.

As an example, we have load balancers from Foundry which can route you to the best place depending on latency or response time while being sensitive to load. What I can say is that some of you on the coasts may see a slight improvement.
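
To give a rough idea of the kind of decision those boxes make, here's a toy sketch. This is not Foundry's actual algorithm, and the node names and numbers are made up:

    # Toy model of latency- and load-sensitive node selection.
    # All names and numbers are made-up examples.
    nodes = {
        'dallas':  {'latency_ms': 48, 'load': 0.62},
        'dc':      {'latency_ms': 21, 'load': 0.55},
        'seattle': {'latency_ms': 83, 'load': 0.20},
    }

    MAX_LOAD = 0.80  # skip nodes that are already too busy

    def pick_node(nodes):
        """Lowest-latency node among those under the load cap."""
        ok = {n: v for n, v in nodes.items() if v['load'] < MAX_LOAD}
        if not ok:  # everything is busy: fall back to least-loaded
            return min(nodes, key=lambda n: nodes[n]['load'])
        return min(ok, key=lambda n: ok[n]['latency_ms'])

    print(pick_node(nodes))  # -> 'dc' for this East Coast caller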

Private Network/Network Management

This is huge for us. In addition to the public internet connectivity, all 3 facilities are connected via a private network (10Gbps).

So basically all our servers and other equipment (load balancers, etc) have connections to both the public network and an isolated private network. This is pretty significant and gives us a lot more control in terms of how we manage things.

Some examples:

1) Everything will be maintained via the private network instead of the public network. Access to manage equipment will be firewalled off from the public network. We have a VPN connection into the private network for completely secure management.

2) Let's say there's a DoS attack. We could simply null-route the public IPs being attacked and re-route traffic via the private network and out through one of the other nodes.

3) Some core things can all be kept in one facility (Dallas), and the others can access them not only over the public network but also over the private one. So as an example, call logs can be maintained in a central location and accessed via a private (10.10.x.x) IP instead of trying to replicate and sync things across the public internet (see the sketch after this list).
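
Here's a simplified sketch of that dual-homed setup. The names and addresses are made-up examples, not our real topology:

    # Each box is dual-homed: one public IP, one private (10.10.x.x) IP.
    # Internal traffic, like pulling call logs from the central Dallas
    # store, uses the private side. All addresses here are examples.
    SERVERS = {
        'dallas-logs': {'public': '203.0.113.10', 'private': '10.10.1.10'},
        'dc-node':     {'public': '198.51.100.7', 'private': '10.10.2.7'},
    }

    def address_for(name, internal=True):
        """Private address for internal traffic, public otherwise."""
        return SERVERS[name]['private' if internal else 'public']

    # A DC node reaches the central call-log store over the private
    # network, so that traffic never touches the public internet.
    log_host = address_for('dallas-logs')  # -> '10.10.1.10'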

Ultimately this will keep things more secure, give us more tools to work with to remedy problems, keep costs low, and help things run more smoothly in general.

Public Network

Outside of the private network, here's what we have in terms of public connectivity. It's similar to The Planet's, but significantly better in some key areas. Beyond this, there are dozens of smaller peering relationships with various ISPs; only the 10Gbps+ links are listed.

We're now dealing with portable IP blocks, so we can re-route things very easily as needed for redundancy and control a lot of the routing ourselves.

Here's what we're dealing with in terms of connectivity:

Dallas (Infomart Facility)

10Gbps - Level3
10Gbps - Global Crossing
10Gbps - Internap
10Gbps - Comcast
20Gbps - NTT America
10Gbps - SAVVIS
10Gbps - Equinix (peering)


Washington DC (365 Main Facility)

10Gbps - Level3
10Gbps - Comcast
10Gbps - Equinix (peering)
20Gbps - Internap
10Gbps - NTT America

Seattle (Sabey Facility)

10Gbps - Level3
20Gbps - Internap
10Gbps - Comcast
10Gbps - NTT America
10Gbps - Qwest
10Gbps - SIX (peering)

For those of you interested, here is some more detailed info:

http://softlayer.com/facilities_network_n2.html