Network Architecture

Learn how traffic enters, exits, and traverses our platform — from our redundant Layer 3 core and Internet edge to the Arista switching fabric in every rack.

Extreme MLXe L3 Core · Three-Carrier Blend · Ohio IX Peering · Arista Top-of-Rack

Deterministic by Design

Cloud Propeller’s network is purpose-built, carrier-grade, redundant, and predictable. The Layer 3 core handles public IP routing, upstream transit, and peering, while the Layer 2 fabric inside each rack handles east/west traffic between hosts. Client routing and security policies live inside client-controlled virtual appliances, rather than inside the shared physical core.
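
To make that separation of concerns concrete, here is a minimal Python sketch (all names and prefixes are hypothetical illustrations, not Cloud Propeller tooling): the shared core forwards purely on destination prefix, while per-tenant security policy lives in an object standing in for the client's virtual appliance.

```python
from ipaddress import ip_address, ip_network

# Hypothetical model, not Cloud Propeller tooling. The shared L3 core
# forwards purely on destination prefix and knows nothing about tenant policy.
CORE_ROUTES = {
    ip_network("198.51.100.0/24"): "tenant-a-appliance",  # example tenant block
    ip_network("0.0.0.0/0"): "blended-upstream",          # default route
}

def core_next_hop(dst: str) -> str:
    """Longest-prefix match, the only decision the shared core makes."""
    addr = ip_address(dst)
    matches = [net for net in CORE_ROUTES if addr in net]
    return CORE_ROUTES[max(matches, key=lambda net: net.prefixlen)]

class TenantAppliance:
    """Stand-in for a client-controlled virtual appliance: security
    policy lives here, not in the shared physical core."""
    def __init__(self, allowed_ports: set):
        self.allowed_ports = allowed_ports

    def admit(self, dst_port: int) -> bool:
        return dst_port in self.allowed_ports

tenant_a = TenantAppliance(allowed_ports={22, 443})
print(core_next_hop("198.51.100.10"))  # -> tenant-a-appliance
print(tenant_a.admit(443))             # -> True; the tenant decides, not the core
```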

Architecture Overview

Designed for Continuous Network Availability

Our network architecture consists of a redundant Extreme Networks MLXe-8 Layer 3 core handling Internet edge and routing, a three-provider blended Internet service on the upstream side with direct peering at Ohio IX, and a redundant Arista top-of-rack pair inside every rack.

Each layer has one clear job and an obvious fallback when hardware fails. The sections below cover each component in detail — how clients get connectivity, the L3 core hardware that routes it, and the switching fabric that moves tenant traffic between hosts.
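
One way to see why each layer is deployed as a redundant pair is simple availability arithmetic: a pair is down only when both members are down at once. The sketch below assumes an illustrative 99.9% single-device figure, not a measured number for this platform.

```python
# Illustrative availability arithmetic; 99.9% is an assumed per-device
# figure, not a measured one. A redundant pair is down only when both
# members are down at the same time.
def pair_availability(single: float) -> float:
    return 1 - (1 - single) ** 2

single = 0.999
print(f"one device:  {single:.3%}")                     # 99.900%
print(f"device pair: {pair_availability(single):.4%}")  # 99.9999%
```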

[Network diagram: Blended Internet from three providers feeds redundant MLXe cores (Layer 3 core + Internet edge, joined by a 400 Gbps MCT LAG), with direct Ohio IX peering to major cloud and content networks. Each Arista ToR pair uplinks to the core over a 200 G (2 × 100 Gbps) MLAG and interconnects over a 400G MLAG (4 × 100 Gbps). Tenant uplink options, provisioned at the client/tenant level: 1 Gbps (burst to 5 Gbps), 10 Gbps (burst to 50 Gbps), up to 100 Gbps dedicated. Dedicated Spectrum circuits also available via our MMR, with no cross-connect fees.]

Upstream Carriers

  • Spectrum
  • FairlawnGig
  • Hurricane Electric

Peering

Direct Connectivity via Ohio IX

Akamai Technologies (AS20940), altaFiber (AS6181), Amazon (AS16509), Apple (AS31128), Aunalytics (AS54069), Bresco Broadband (AS32899), CacheFly (AS30081), Charter Communications (AS10796), CISP (AS16617), Cloudflare (AS13335), CNI (AS7106), Cologix (AS8001), Consolidated Cooperative (AS394740), Consolidated Smart Systems (AS14978), Designer Brands (DSW) (AS63157), Everstream (AS19009), FairlawnGig (AS396238), Fastly (AS54113), FCIX (AS33495), First Communications (AS2707), Foothills Rural Telephone (AS394111), GeoStar Communications (AS22062), Glo Fiber (AS2731), Great Plains Communications (AS13807), Haywire (AS397449), Hurricane Electric (AS6939), Integrated Network Concepts (AS394280), Involta (AS14230), IP Pathways (AS393775), King Networks (AS395848), LightSpeed Technologies (AS55077), Linear 1 Technologies (AS63343), LTech Solutions (AS39937), Meta (AS63293), Microsoft (AS62866), Momentum Telecom (AS25645), Netflix (AS40027), Netskrt Systems (AS53471), OmniFiber (AS15081), OneIT (AS64228), Packet Clearing House (AS3856), Setson (AS397095), Skymesh (AS63027), Smart Way Communications (AS25787), South Central Power (AS394611), US Signal (AS26554), ValTech Communications (AS394828), Valve (Steam Cache) (AS32590), Vision Concept Technology (AS393618), City of Wadsworth (AS20009), Watch Communications (AS46231), WeConnect (AS40460)

Carriers, Peering & Client Connectivity

Three Providers, One Blended Edge, Over 50,000 IPv4 Routes at Under 2 ms Latency

Upstream connectivity is delivered as a three-provider blended Internet service, with direct peering at Ohio IX layered on top.

The three-carrier blend means there is no single-carrier dependency at the physical edge — if one provider has a fault, traffic continues over the others without client intervention. Ohio IX peering shortens paths to major cloud and content networks, cutting latency and unnecessary transit for common destinations. Clients get immediate upstream redundancy by default, with tenant-level uplinks and dedicated Spectrum options available through our meet-me-rack (MMR).
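
For operators who want independent visibility into the blend, a monitoring sketch like the one below can confirm per-carrier reachability. The probe addresses are hypothetical placeholders, the ping flags are Linux syntax, and actual failover is handled by routing on the cores, not by a script.

```python
import subprocess

# Hypothetical per-carrier probe targets (placeholder documentation
# addresses). Real failover happens in routing on the cores; this is
# only an external reachability check.
UPSTREAMS = {
    "carrier-1": "192.0.2.1",
    "carrier-2": "192.0.2.2",
    "carrier-3": "192.0.2.3",
}

def reachable(host: str) -> bool:
    """One ICMP echo with a 1 s timeout (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        capture_output=True,
    )
    return result.returncode == 0

up = [name for name, addr in UPSTREAMS.items() if reachable(addr)]
print(f"{len(up)}/{len(UPSTREAMS)} upstreams reachable: {up}")
```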

Carrier Blend

  • 100% uptime, three-provider blended Internet service
  • Immediate upstream redundancy by default
  • No single-carrier dependency at the physical edge
  • Designed to keep client connectivity simple and resilient

Ohio IX Peering

  • Direct peering at Ohio IX
  • Lower latency and fewer transit dependencies
  • Shorter paths to major cloud and content networks
  • Direct peering with Microsoft, AWS, Apple, Valve, CacheFly, Cloudflare, and more

Client Uplinks

  • 1 Gbps (burst to 5 Gbps; burst behavior sketched below)
  • 10 Gbps (burst to 50 Gbps)
  • Up to 100 Gbps dedicated available
  • Dedicated Spectrum circuits via MMR — no cross-connect fees
  • Additional DIA & Layer 2 connectivity to all ISPs at Cologix MMR
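
To make the burst tiers above concrete, here is a minimal token-bucket sketch of how a committed 1 Gbps uplink with burst to 5 Gbps could behave. The refill rate and bucket depth are assumptions for illustration, not the platform's actual shaper configuration.

```python
# Token-bucket sketch of a "1 Gbps, burst to 5 Gbps" tenant uplink.
# Refill rate and bucket depth are illustrative assumptions.
COMMITTED_BPS = 1_000_000_000   # sustained refill: 1 Gbps
BUCKET_BITS = 5_000_000_000     # depth: roughly one second of 5 Gbps burst

def simulate(offered_bps: float, seconds: int) -> None:
    bucket = BUCKET_BITS  # start with a full bucket
    for t in range(seconds):
        bucket = min(bucket + COMMITTED_BPS, BUCKET_BITS)  # refill, capped
        sent = min(offered_bps, bucket)                    # spend tokens
        bucket -= sent
        print(f"t={t}s offered={offered_bps / 1e9:.1f}G sent={sent / 1e9:.2f}G")

# A sustained 5 Gbps offer bursts briefly, then settles at the 1 Gbps
# committed rate once the bucket is drained.
simulate(5e9, 5)
```
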
Layer 3 Core Hardware

Carrier-Grade L3 Core

At the hardware layer, the Extreme Networks MLXe-8 is a carrier-class 8-slot routing platform delivering 3.84 Tbps of switch fabric capacity, 3.2 Tbps of forwarding capacity, 2.38 billion packets per second of routing performance, and support for up to 2.4 million IPv4 routes in hardware FIB. Its redundant management, switch fabric, power, and cooling architecture makes it well suited to high-availability routed core and Internet edge roles.
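
To put the 2.38 billion packets per second figure in context, the arithmetic below converts a pps budget into Ethernet wire throughput at several frame sizes, counting the 20 bytes of preamble and inter-frame gap each frame consumes on the wire. These are textbook numbers, not vendor test results.

```python
# Convert a packets-per-second budget into Ethernet wire throughput.
# Each frame consumes 20 extra bytes on the wire (preamble + inter-frame gap).
WIRE_OVERHEAD_BYTES = 20

def wire_gbps(pps: float, frame_bytes: int) -> float:
    return pps * (frame_bytes + WIRE_OVERHEAD_BYTES) * 8 / 1e9

for size in (64, 512, 1500):
    print(f"{size:>5}B frames @ 2.38 Bpps = {wire_gbps(2.38e9, size):,.0f} Gbps")
# At 64B frames the pps budget is the constraint (~1,599 Gbps); at larger
# frames the 3.2 Tbps forwarding capacity caps throughput first.
```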

Note on Brocade / Extreme Networks

Legacy Brocade branding may still appear on the hardware shown; the MLX platform has been part of Extreme Networks since 2017.

Extreme Networks MLXe-8 Layer 3 core routers in production, providing Cloud Propeller’s shared routed foundation for upstream transit, peering, and Internet edge connectivity.

Arista — Top of Rack

Low-Latency, Cut-Through East/West Fabric

The Arista fabric carries east/west traffic between hosts and north/south traffic back toward the routed core. This layer is intentionally simple, fast, and redundant.

Every compute rack for both Mission Critical Compute (MCC) and General Purpose Compute (GPC) is built around a redundant pair of Arista DCS-7060CX2-32S top-of-rack switches. Hosts are dual-connected into that pair, and the ToR pair uplinks back into the Layer 3 core. This keeps switching local to the rack, minimizes failure domains, and preserves predictable behavior during maintenance or hardware faults.

MCC hosts connect at 4 × 100 Gbps per host; GPC runs at 4 × 10 Gbps per host. Both tiers follow the same operating model: redundant switching, deterministic forwarding, and minimal complexity in the shared physical fabric.
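
A useful way to reason about this layout is a quick oversubscription calculation: east/west traffic between hosts stays on the ToR pair at line rate, so only north/south traffic contends for the uplinks to the core. The host counts below are assumed examples; the 200 Gbps uplink figure comes from the rack diagram.

```python
# North/south oversubscription per rack. Host counts are assumed examples;
# the 200 Gbps figure is the ToR pair's MLAG uplink to the core.
def oversubscription(hosts: int, host_gbps: int, uplink_gbps: int) -> float:
    return hosts * host_gbps / uplink_gbps

# MCC: 4 x 100 Gbps per host, sized primarily for in-rack east/west traffic
print(f"MCC, 8 hosts: {oversubscription(8, 400, 200):.0f}:1 north/south")
# GPC: 4 x 10 Gbps per host
print(f"GPC, 8 hosts: {oversubscription(8, 40, 200):.1f}:1 north/south")
```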

[Rack-level switching diagram: each redundant Arista ToR pair uplinks to the Layer 3 core over a 200 G (2 × 100 Gbps) MLAG and interconnects over a 400G MLAG (4 × 100 Gbps); hosts are quad-connected. MCC (Gen4): 4 × 100 Gbps per host, 2 × 100 Gbps to each ToR. GPC (Gen3): 4 × 10 Gbps per host, 2 × 10 Gbps to each ToR. Originally, Gen3 relied on a 10G ToR fabric; as of 2025 both platforms share the same Arista 100G rack-level topology.]

Redundant Fabric

  • Redundant Arista ToR pair in every rack
  • 4 × 100 Gbps Switch-to-Switch MLAGs, including rack-to-rack (per-flow hashing sketched below)
  • Dual-connected hosts
  • Dual uplinks from the rack back to the core
  • Maintenance without rack-wide disruption
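
The per-flow hashing mentioned above is what lets a 4 × 100 Gbps MLAG behave like one 400 Gbps link without reordering packets inside a flow: every 5-tuple consistently maps to one member. A rough sketch of the idea follows; real switch ASICs use their own hardware hash, not Python.

```python
import hashlib

MEMBERS = 4  # members of a 4 x 100 Gbps MLAG

def member_for_flow(src_ip: str, dst_ip: str, proto: str,
                    src_port: int, dst_port: int) -> int:
    """Map a 5-tuple to one LAG member. Real ASICs use a hardware hash;
    the point is only that a given flow always lands on the same member,
    so packets within a flow are never reordered."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % MEMBERS

print(member_for_flow("10.0.0.1", "10.0.0.2", "tcp", 49152, 443))  # some member
print(member_for_flow("10.0.0.1", "10.0.0.2", "tcp", 49152, 443))  # same member
print(member_for_flow("10.0.0.3", "10.0.0.2", "tcp", 50000, 443))  # may differ
```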

Low-Latency Switching

  • Cut-through forwarding for east/west traffic
  • Consistent line-rate behavior under load
  • Deterministic intra-rack and inter-rack performance
  • Simple switching design with limited failure domains

Tiered Host Connectivity

  • MCC: 4 × 100 Gbps per host
  • GPC: 4 × 10 Gbps per host
  • Same Arista switching model across both tiers
  • Host-side bandwidth scales with the platform tier
Layer 2 Fabric

Low-Latency Layer 2 Fabric

Cloud Propeller’s rack fabric is built on Arista DCS-7060CX2-32S switches, a 1RU high-performance top-of-rack platform delivering 6.4 Tbps of throughput. Arista’s cut-through forwarding keeps east/west latency low by design — switching latency as low as 450 ns per hop — complemented by a shared 22 MB packet buffer pool for burst tolerance under load. In practice, this allows Cloud Propeller to maintain a simple, redundant, and deterministic rack-level switching model across both Mission Critical Compute (MCC) and General Purpose Compute (GPC) environments.
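
The 450 ns figure is easiest to appreciate next to frame serialization delay, the time a store-and-forward hop would spend buffering a full frame before forwarding it; cut-through forwarding starts transmitting once the header is read. The arithmetic:

```python
# Serialization delay: time to receive a full frame before a
# store-and-forward hop can begin transmitting it. Cut-through forwarding
# avoids most of this by forwarding as soon as the header is parsed.
def serialization_ns(frame_bytes: int, link_gbps: float) -> float:
    return frame_bytes * 8 / link_gbps  # bits / (Gbit/s) = nanoseconds

for gbps in (10, 100):
    print(f"1500B frame @ {gbps}G: {serialization_ns(1500, gbps):.0f} ns "
          f"of buffering vs ~450 ns total cut-through hop")
```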

Arista DCS-7060CX2-32S top-of-rack switches in production, handling east/west traffic and uplinking each rack back into the Layer 3 core.

Need to Talk Through Connectivity?

If your workloads are sensitive to carrier diversity, peering paths, routed design, or tenant uplink requirements, we are happy to walk through the network with you.

Or view the physical facilities.