Carrier Blend
- 100% uptime, three-provider blended Internet service
- Immediate upstream redundancy by default
- No single-carrier dependency at the physical edge
- Designed to keep client connectivity simple and resilient
Learn how traffic enters, exits, and traverses our platform — from our redundant Layer 3 core and Internet edge to the Arista switching fabric in every rack.
Cloud Propeller’s network is purpose-built, carrier-grade, redundant, and predictable. The Layer 3 core handles public IP routing, upstream transit, and peering, while the Layer 2 fabric inside each rack handles east/west traffic between hosts. Client routing and security policies live inside client-controlled virtual appliances, rather than inside the shared physical core.
Our network architecture consists of three layers: a redundant Extreme Networks MLXe-8 Layer 3 core handling the Internet edge and routing; a three-provider blended Internet service on the upstream side, with direct peering at Ohio IX; and a redundant Arista top-of-rack pair inside every rack.
Each layer has one clear job and an obvious fallback when hardware fails. The sections below cover each component in detail — how clients get connectivity, the L3 core hardware that routes it, and the switching fabric that moves tenant traffic between hosts.
Networks reachable via our direct peering include:
AS20940
AS6181
AS16509
AS31128
AS54069
AS32899
AS30081
AS10796
AS16617
AS13335
AS7106
AS8001
AS394740
AS14978
AS63157
AS19009
AS396238
AS54113
AS33495
AS2707
AS394111
AS22062
AS2731
AS13807
AS397449
AS6939
AS394280
AS14230
AS393775
AS395848
AS55077
AS63343
AS39937
AS63293
AS62866
AS25645
AS40027
AS53471
AS15081
AS64228
AS3856
AS397095
AS63027
AS25787
AS394611
AS26554
AS394828
AS32590
AS393618
AS20009
AS46231
AS40460
Upstream is delivered as a three-provider blended Internet service with direct peering at Ohio IX layered on top.
The three-carrier blend means there is no single-carrier dependency at the physical edge — if one provider has a fault, traffic continues over the others without client intervention. Ohio IX peering shortens paths to major cloud and content networks, cutting latency and unnecessary transit for common destinations. Clients get immediate upstream redundancy by default, with tenant-level uplinks and dedicated Spectrum options available through our meet-me-rack (MMR).
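The redundancy benefit of a carrier blend can be sketched with simple availability arithmetic. The per-carrier availability figures below are illustrative assumptions, not measured values for any actual upstream provider, and the model assumes carrier outages are independent:

```python
# Illustrative only: per-carrier availability values are assumptions,
# and outages are modeled as statistically independent events.
carrier_availability = [0.999, 0.999, 0.999]  # three blended upstreams

# Client traffic is lost only if all three carriers fail at once.
all_down = 1.0
for a in carrier_availability:
    all_down *= (1.0 - a)

blended_availability = 1.0 - all_down
print(f"Blended availability: {blended_availability:.9f}")
```

Under these assumptions, three independent 99.9% carriers blend to roughly 99.9999999% upstream availability; the point of the sketch is that blended availability improves multiplicatively with each independent carrier, not that any specific figure is guaranteed.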
At the hardware layer, the Extreme Networks MLXe-8 is a carrier-class 8-slot routing platform delivering 3.84 Tbps of switch fabric capacity, 3.2 Tbps of forwarding capacity, 2.38 billion packets per second of routing performance, and support for up to 2.4 million IPv4 routes in hardware FIB. Its redundant management, switch fabric, power, and cooling architecture makes it well suited to high-availability routed core and Internet edge roles.
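To put the 2.4 million-route FIB figure in context, a quick headroom check against a full IPv4 table is shown below. The global table size used here is an assumption in the rough range of publicly reported figures, not an authoritative number:

```python
# Headroom check: hardware FIB capacity vs. a full global IPv4 table.
# GLOBAL_V4_TABLE is an assumed ballpark figure, not a live measurement.
FIB_CAPACITY = 2_400_000     # IPv4 routes the MLXe-8 can hold in hardware
GLOBAL_V4_TABLE = 950_000    # assumed full-table size

headroom = FIB_CAPACITY - GLOBAL_V4_TABLE
print(f"Routes of headroom: {headroom:,}")  # 1,450,000
```

The takeaway is that the platform can carry full tables from multiple upstreams with substantial room for growth, which matters for an edge router taking routes from three carriers plus an IX.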
Legacy Brocade branding may still appear on the hardware shown; the MLX platform has been part of Extreme Networks since 2017.
The Arista fabric carries east/west traffic between hosts and north/south traffic back toward the routed core. This layer is intentionally simple, fast, and redundant.
Every compute rack for both Mission Critical Compute (MCC) and General Purpose Compute (GPC) is built around a redundant pair of Arista DCS-7060CX2-32S top-of-rack switches. Hosts are dual-connected into that pair, and the ToR pair uplinks back into the Layer 3 core. This keeps switching local to the rack, minimizes failure domains, and preserves predictable behavior during maintenance or hardware faults.
MCC hosts connect at 4 × 100 Gbps each; GPC hosts at 4 × 10 Gbps. Both tiers follow the same operating model: redundant switching, deterministic forwarding, and minimal complexity in the shared physical fabric.
Cloud Propeller’s rack fabric is built on Arista DCS-7060CX2-32S switches, a 1RU high-performance top-of-rack platform delivering 6.4 Tbps of throughput. Arista’s cut-through forwarding keeps east/west latency low by design — switching latency as low as 450 ns per hop — complemented by a shared 22 MB packet buffer pool for burst tolerance under load. In practice, this allows Cloud Propeller to maintain a simple, redundant, and deterministic rack-level switching model across both Mission Critical Compute (MCC) and General Purpose Compute (GPC) environments.
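The latency figures above can be put together with back-of-the-envelope arithmetic. The switch latency and port speed come from the numbers quoted in this section; the frame size and the single-hop east/west path are illustrative assumptions:

```python
# Back-of-the-envelope rack-fabric latency, using figures quoted above.
# Frame size and path shape (host -> ToR -> host) are assumptions.
SWITCH_LATENCY_NS = 450        # cut-through switching latency per hop
LINK_SPEED_BPS = 100e9         # MCC host ports run at 100 Gbps
FRAME_BYTES = 1500             # a standard Ethernet MTU frame

# Serialization delay: time to clock one frame onto the wire.
serialization_ns = FRAME_BYTES * 8 / LINK_SPEED_BPS * 1e9
print(f"Serialization: {serialization_ns:.0f} ns")          # 120 ns

# East/west within a rack: one ToR hop plus serialization.
one_hop_ns = SWITCH_LATENCY_NS + serialization_ns
print(f"One ToR hop + serialization: {one_hop_ns:.0f} ns")  # 570 ns
```

At these speeds the switch itself, not the wire time, dominates the budget only marginally: a full-size frame costs about 120 ns to serialize at 100 Gbps, so a single in-rack hop lands well under a microsecond.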
If your workloads are sensitive to carrier diversity, peering paths, routed design, or tenant uplink requirements, we are happy to walk through the network with you.
Or view the physical facilities.