Features
Deep Buffer with HMC
Absorbs potential network traffic spikes.
Universal Deployment
Offers diverse options for a wide range of deployments, such as data center spine, data center edge, and DCI.
Optimized Power Profile
The QFX10002-60C delivers an optimized power profile per 100GbE port, reducing OpEx.
Seamless Transitions
Lets you evolve from 10GbE to 40GbE to 100GbE, protecting your investment.
Fabric Management
Juniper’s Apstra solution provides full Day 0/1/2 capabilities for IP/EVPN fabrics.
Architectural Flexibility
Supports EVPN/VXLAN, IP Fabric, and multichassis link aggregation group (MC-LAG) for Layer 2 and Layer 3 networks, delivering the flexibility needed to quickly deploy and support new services.
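To illustrate the EVPN/VXLAN support mentioned above, here is a minimal Junos-style configuration sketch for mapping a VLAN into a VXLAN overlay. This is not taken from this datasheet; the loopback interface, route distinguisher, route target, and VLAN/VNI values are placeholders to adapt to your own fabric design:

```
# Hypothetical values throughout; adjust to your fabric addressing plan.
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.0.2.1:1
set switch-options vrf-target target:65000:1
# Map a Layer 2 VLAN to a VXLAN network identifier (VNI)
set vlans v100 vlan-id 100
set vlans v100 vxlan vni 10100
```

In a spine-and-leaf design, stanzas like these would typically live on the leaf switches, with EVPN signaling carried over a BGP overlay between loopbacks.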
Precision Time Protocol
Supports Precision Time Protocol (PTP) packet time-stamping for accurate synchronization in applications such as financial trading.
Overview
QFX10002 fixed-configuration switches are high-density, highly scalable platforms designed for today’s most demanding data center environments. They’re loaded with power and functionality, all housed in a compact, 2 U form factor.
The switches feature a deep buffer with Hybrid Memory Cube (HMC) to absorb network traffic spikes and reduce application latency. They also support 10GbE, 40GbE, and 100GbE in the same platform, giving you the flexibility to transition to greater network capacity as your traffic demands increase.
The QFX10000 line of switches provides universal building blocks for industry-standard architectures such as spine-and-leaf and Layer 3 fabrics. Juniper’s Apstra solution delivers comprehensive fabric management, providing intent-driven automation for deployment along with robust monitoring for assured operation.
QFX10002 switches, which use our custom silicon Q5 ASICs, come in three models:
- QFX10002-36Q for data center spine and core deployments: This 2 U switch provides 36x40GbE or 12x100GbE ports for throughput of up to 2.88 Tbps.
- QFX10002-72Q for data center spine and core deployments: This 2 U switch provides 72x40GbE or 24x100GbE ports for throughput of up to 5.76 Tbps.
- QFX10002-60C for data center spine, data center edge, and data center interconnect (DCI) deployments: This 2 U switch offers 60x100GbE ports for throughput of up to 12 Tbps.
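As a quick sanity check on the figures above (a sketch, not vendor tooling), each throughput number follows from ports × line rate × 2, counting both directions of full-duplex traffic:

```python
# Reproduce the datasheet switching-capacity figures from port counts.
# Throughput counts both directions (full duplex): ports x Gbps x 2.
PORT_CONFIGS = {
    "QFX10002-36Q": (36, 40),    # 36 x 40GbE
    "QFX10002-72Q": (72, 40),    # 72 x 40GbE
    "QFX10002-60C": (60, 100),   # 60 x 100GbE
}

def throughput_tbps(ports: int, gbps_per_port: int) -> float:
    """Full-duplex switching capacity in Tbps."""
    return ports * gbps_per_port * 2 / 1000

for model, (ports, rate) in PORT_CONFIGS.items():
    print(f"{model}: {throughput_tbps(ports, rate):.2f} Tbps")
```

The results (2.88, 5.76, and 12 Tbps) match the switching-capacity row in the specifications table below.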
Specifications
Form Factor | Fixed 2 U core/spine |
---|---|
Dimensions (W x H x D) | 17.4 x 3.46 x 31 in (44.2 x 8.8 x 78.7 cm) |
Switching capacity | QFX10002-36Q: 2.88 Tbps/1 Bpps; QFX10002-72Q: 5.76 Tbps/2 Bpps; QFX10002-60C: 12 Tbps/4 Bpps |
Port densities (10/40/100GbE) | QFX10002-36Q/72Q: 144/288 x 10GbE, 36/72 x 40GbE, 12/24 x 100GbE; QFX10002-60C: 192 x 10GbE, 60 x 40/100GbE |
Power consumption | QFX10002-36Q: 800 W (max), 560 W (typical); QFX10002-72Q: 1,425 W (max), 2,000 W (typical); QFX10002-60C: 2,500 W (max), 2,000 W (typical) |
Buffer capacity | QFX10002-36Q: 12 GB; QFX10002-72Q: 24 GB; QFX10002-60C: 24 GB |
MAC addresses | QFX10002-36Q: 256,000; QFX10002-72Q: 512,000; QFX10002-60C: 1,000,000 |
IPv4 unicast/multicast routes | 128,000 |
IPv6 unicast/multicast routes | 128,000 |
Number of VLANs | 4,000 |
ARP entries | QFX10002-36Q: 192,000; QFX10002-72Q: 340,000; QFX10002-60C: 340,000 |
Latency | 2.5 μs within PFE; 5.5 μs across PFEs |
Overlay Management and Protocols | QFX10002-36Q, QFX10002-72Q: Contrail Networking, VMware NSX, VXLAN L2 & L3 Gateway, VXLAN OVSDB, EVPN-VXLAN; QFX10002-60C: Contrail Networking, VMware NSX, VXLAN L2 & L3 Gateway, VXLAN OVSDB, EVPN-VXLAN, EVPN multihoming |