FEATURE Transforming metro economics
superchannel linecards must be
deployed to satisfy this traffic
matrix. While an instant bandwidth
capability could reduce the cash
flow impact, and the linecards’
capacity would fill eventually,
this type of network would tend
to have considerable “service-
ready” capacity for much of its
operational lifespan. Such capacity
would of course be available to
meet unexpected service demands
– frequently caused by the
higher-capacity, dynamic demands
we expect to see from Layer C.
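The fill dynamics described above can be sketched with a toy model. The growth rate and starting demand below are illustrative assumptions, not figures from the article:

```python
# Illustrative sketch (numbers are assumptions, not from the article):
# how long a 500-Gbps superchannel linecard, deployed against a smaller
# initial demand, stays partly "service-ready" under steady traffic growth.

linecard_gbps = 500       # superchannel linecard capacity, deployed on day one
demand_gbps = 100.0       # assumed initial demand on the route
growth = 0.30             # assumed 30% annual traffic growth

years = 0
while demand_gbps < linecard_gbps:
    demand_gbps *= 1 + growth
    years += 1

print(f"Capacity fills after ~{years} years")  # ~7 years at 30% growth
```

Under these assumptions the linecard carries unused, service-ready capacity for most of a typical operational lifespan, which is exactly the headroom available for unexpected Layer C demands.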
Meanwhile, the right portion
of Figure 5 shows the pitfalls of a
more conventional approach. Here,
bandwidth demands are satisfied
individually using 100-Gbps metro
transponders. But the opex of this
approach scales linearly as demand
increases. This approach also
puts a lot of strain on the network
capacity.
FIGURE 5. The metro core segment
poses challenges to approaches
based on long haul PICs (left) and
conventional 100G transponders
(right). [Figure 5 traffic matrix:
A–B 200 Gbps, A–C 100 Gbps,
A–D 200 Gbps.]
Embedded in this future
architecture will be data centers
of different sizes housing VNFs,
cached content, and applications.
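As a rough illustration of the two approaches compared in Figure 5, the following sketch counts linecards for the example traffic matrix (A–B 200 Gbps, A–C 100 Gbps, A–D 200 Gbps). The assumption of one 500G PIC linecard pair per hub–spoke route is ours, for illustration only:

```python
# Illustrative sketch (not from the article): compare linecard counts and
# unused "service-ready" capacity for the Figure 5 hub-and-spoke example.
# Assumes one 100-Gbps transponder pair per 100 Gbps of demand versus one
# 500-Gbps PIC superchannel linecard pair per route.

demands_gbps = {("A", "B"): 200, ("A", "C"): 100, ("A", "D"): 200}

# Conventional approach: ceil(demand / 100) transponder pairs per route.
transponder_pairs = sum(-(-d // 100) for d in demands_gbps.values())

# PIC approach: one 500G superchannel linecard pair per route.
pic_pairs = len(demands_gbps)
service_ready = pic_pairs * 500 - sum(demands_gbps.values())

print(f"100G transponder pairs: {transponder_pairs}")        # 5
print(f"500G PIC linecard pairs: {pic_pairs}, "
      f"service-ready capacity: {service_ready} Gbps")       # 3 pairs, 1000 Gbps
```

The transponder count (and hence opex) grows linearly with demand, while the PIC case deploys far more capacity than the matrix requires, which is the cash-flow trade-off discussed above.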
Today the metro contains
numerous physical appliances,
including routers, switches, and
OTN elements. The future goal of
the metro aggregation architecture
will be to virtualize everything
at Layer 3 and above into VNFs
running on x86 servers (Layer C).
Such an approach would provide
the simplest possible transport
function to and between data
centers, with highly scalable optics
as the foundation, converged
digital switching, and simple but
high-performance packet switching
and forwarding (Layer T).
The three distinct metro
aggregation segments, plus
DCI, are shown in Figure 4.
The metro core segment would
have the most rapid migration to
100 Gbps and could immediately
take advantage of scalable PICs
as the most cost-effective optical
engine for this transformation.
Here we would also see the need
for packet/OTN switching, as
opposed to simple aggregation.
The metro edge segment would
move from 10G to 100G after the
core, with 100-Gbps coherent and
10-Gbps interfaces coexisting for
a time. The function in
this domain would predominantly be
Layer 1 and 2 traffic aggregation.
The access segment would
continue to use 1- to 10-Gbps,
non-coherent transmission (often
using pluggable optics) and Layer 1
and 2 traffic aggregation functions.
As just mentioned, a PIC-based
approach could provide a scalable
optical engine for the metro core
segment. However, the application
demands a new implementation. For
example, a key characteristic of the
metro core segment is that traffic
patterns are frequently “one to many”
or sometimes “many to many.” Figure
5 (left portion) uses a simplistic
example to illustrate the problem for
a classic “hub and spoke” topology.
Using the 500G PIC technology
as in long haul, multiple pairs of