Blueprint column: Stop the intruders at the door!
by Prayson Pate, Chief Technology Officer, Ensemble, ADVA
Security is one of the biggest concerns about cloud computing. And securing the cloud means stopping intruders at the door by securing its onramp – the edge. How can edge cloud be deployed securely, automatically, at scale, over the public internet?
The bad news is that it’s impossible to be 100% secure, especially when you bring internet threats into the mix.
The good news is that we can make it so difficult for intruders that they move on to easier targets. And we can ensure that we contain and limit the damage if they do get in.
Achieving that requires an automated, layered approach. Automation ensures that policies are up to date, passwords and keys are rotated, and patches and updates are applied. Layering means that breaching one barrier does not give the intruder the keys to the kingdom. Finally, security must be designed in – not tacked on as an afterthought.
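Automated key rotation, one of the policies mentioned above, can be sketched as a simple age check. The credential store layout and the age policy below are hypothetical, purely for illustration, not any specific product's implementation.

```python
import secrets

def rotate_if_stale(store, name, max_age_s, now):
    """Replace a stored credential once it exceeds its maximum age –
    the kind of policy an automation layer enforces continuously
    (hypothetical store layout, for illustration only)."""
    created_at, _ = store.get(name, (float("-inf"), None))
    if now - created_at >= max_age_s:
        store[name] = (now, secrets.token_hex(32))  # fresh random secret

store = {"mgmt-key": (0.0, "old-secret")}
rotate_if_stale(store, "mgmt-key", max_age_s=86400, now=90000.0)
assert store["mgmt-key"][1] != "old-secret"   # past max age: rotated
rotate_if_stale(store, "mgmt-key", max_age_s=86400, now=90060.0)
assert store["mgmt-key"][0] == 90000.0        # still fresh: untouched
```

In a real deployment the rotated secret would also be pushed to the managed nodes over the secure management channel.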
Let’s take a closer look at what edge cloud is, and how we can build and deliver it securely and at scale.

Defining and building the edge cloud
Before we continue with the security discussion, let’s talk about what we mean by edge cloud.
Edge cloud is the delivery of cloud resources (compute, networking, and storage) to the perimeter of the network and the usage of those resources for both standard compute loads (micro-cloud) as well as for communications infrastructure (uCPE, SD-WAN, MEC, etc.), as shown below.
For maximum utility, we must build edge cloud in a manner consistent with public cloud. For many applications that means using standard open source components such as Linux, KVM and OpenStack, and supporting both virtual machines and containers.
One of the knocks against OpenStack is its heavy footprint. A standard data center deployment for OpenStack includes one or more servers for the OpenStack controller, with OpenStack agents running on each of the managed nodes.
It’s possible to optimize this model for edge cloud by slimming down the OpenStack controller and running it on the same node as the managed resources. In this model, all the cloud resources – compute, storage, networking and control – reside in the same physical device. In other words, it’s a “cloud in a box.” This is a great model for edge cloud, and gives us the benefits of a standard cloud model in a small footprint.

Security out of the box
Security at an edge cloud starts when the hosting device or server is installed and initialized. We believe that the best way to accomplish this is with secure zero-touch provisioning (ZTP) of the device over public IP.
The process starts when an unconfigured server is delivered to an end user. Separately, the service provider sends a digital key to the end user. The end user powers up the server and enters the digital key. The edge cloud software builds a secure tunnel from the customer site to the ZTP server, and delivers the security key to identify and authenticate the edge cloud deployment. This step is essential to prevent unauthorized access if the hosting server is delivered to the wrong location. At that point, the site-specific configuration can be applied using the secure tunnel.
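The key-verification step above can be sketched as a challenge-response over the already-established tunnel. This is a minimal illustration assuming an HMAC-based check; the article does not specify the actual mechanism, and all names here are hypothetical.

```python
import hashlib
import hmac

def derive_response(digital_key: str, challenge: bytes) -> str:
    """Edge device side: answer the ZTP server's challenge with an HMAC,
    so the digital key itself never crosses the wire."""
    return hmac.new(digital_key.encode(), challenge, hashlib.sha256).hexdigest()

def ztp_verify(expected_key: str, challenge: bytes, response: str) -> bool:
    """ZTP server side: recompute the HMAC and compare in constant time
    before releasing any site-specific configuration."""
    expected = hmac.new(expected_key.encode(), challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = b"nonce-from-ztp-server"
response = derive_response("user-entered-digital-key", challenge)
assert ztp_verify("user-entered-digital-key", challenge, response)   # right site
assert not ztp_verify("some-other-site-key", challenge, response)    # wrong site
```

The constant-time comparison matters here: a naive string comparison would leak timing information to an attacker probing the ZTP endpoint over public IP.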
The secure tunnel doesn’t go away once the ZTP process completes. The management and orchestration (MANO) software uses the management channel for ongoing control and monitoring of the edge cloud. This approach provides security even when the connectivity is over public IP.

Security on the edge cloud
One possible drawback to the distributed compute resources and interfaces in an edge cloud model is an increased attack surface for hackers. We must defend edge cloud nodes with layered security at the device, including:
• Application layer
– software-based encryption of data plane traffic at Layers 2, 3, or 4 as part of platform, with the addition of third-party firewall/UTM as a part of the service chain
• Management layer
– two-factor authentication at customer site with encryption of management and user tunnels
• Virtualization layer
– safeguard against VM escape (protecting one VM from another, and prevention of rogue management system connectivity to hypervisor) and VNF attestation via checksum validation
• Network layer
– modern encryption along with Layer 2 and Layer 3 protocols and micro-segmentation to separate management traffic from user traffic, and to protect both

Security of the management software
Effective automation of edge cloud deployments requires sophisticated MANO software, including the ZTP machinery. All of this software must be able to communicate with the managed edge cloud nodes, and do so securely. This means the use of modern security gateways to both protect the MANO software, as well as to provide the secure management tunnels for connectivity.
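The secure management tunnels can be illustrated with a client-side TLS configuration. This is a generic sketch using Python's standard ssl module, not ADVA's actual gateway setup.

```python
import ssl

def management_tunnel_context(ca_file=None):
    """Build a client TLS context suitable for an ongoing management
    channel: modern protocol floor, hostname checks, and mandatory
    certificate verification (generic settings, for illustration)."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy TLS
    ctx.check_hostname = True                      # pin the server identity
    ctx.verify_mode = ssl.CERT_REQUIRED            # no anonymous peers
    return ctx

ctx = management_tunnel_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.minimum_version >= ssl.TLSVersion.TLSv1_2
```

A production security gateway would add mutual (client-certificate) authentication on top of this, so that only enrolled edge nodes can reach the MANO software at all.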
But that’s not enough. The MANO software should support scalable deployments and multi-tenancy. Scalability should be built using modern techniques so that tools like load balancers can be used to support scale-out. Multi-tenancy is a useful tool to separate customers or regions and to contain security breaches.

Security is an ongoing process
Hackers aren’t standing still, and neither can we. We must perform ongoing security scans of the software to ensure that vulnerabilities are not introduced. We must also monitor the open source distributions and apply patches as needed. A complete model would include:
Automated source code verification by tools such as Protecode and Black Duck
Automated functional verification by tools such as Nessus and OpenSCAP
Monitoring of vulnerability within open source components such as Linux and OpenStack
Following recommendations from the OpenStack Security Group (OSSG) to identify security vulnerabilities and required patches
Application of patches and updates as needed

Build out the cloud, but secure it
The move to the cloud means embracing multi-cloud models, and that should include edge cloud deployments for optimized application placement. But ensuring security at those distributed edge cloud nodes means applying security in an automated, layered approach. There are tools and methods to realize this approach, but it takes discipline and dedication to do so.
Cisco: IT must align networking to business strategy
IT teams need to pivot from being consumed with maintaining the status quo to becoming an enabler of new business innovation, according to a survey conducted by Cisco of over 2000 IT leaders and network strategists.
"IT teams today are running complex mission critical networks that are increasingly capable of providing rich data. But using that data to improve the operations, security, or business impact of the network requires new tools. That's why IT teams are embracing intent-based networking, AI and machine learning — because the business demands it," said Scott Harrell, SVP and GM, Cisco Enterprise Networking.
Some highlights from Cisco's Global Networking Trends Report and Survey:
IT leaders expect new wireless technologies, IoT and AI-enabled operations, threat detection and remediation to have the biggest impact on their network strategy and design over the next five years.
The top priority for global IT leaders and network strategists is to maximize the business value of IT and more closely align to business needs.
- Almost 40 percent of IT leaders named maximizing IT’s business value as their number one priority, higher than simplifying operations, optimizing employee productivity and minimizing security events.
- In order to achieve this, leaders and strategists believe investing in AI technologies is crucial. Almost 50 percent of network strategists believe increasing the use of analytics and AI will help enable the ideal network.
Intent-based networking is coming, allowing organizations to build on their software-defined networking foundations.
- 41 percent of those surveyed claim to have at least one instance of SDN in at least one of their network domains.
- Only 4 percent of respondents believe their networks have moved beyond software-defined and are intent-based today. However, 35 percent believe their networks will be fully intent-based in two years’ time.
- When asked to indicate where on Cisco’s Digital Network Readiness Model their networks currently operate, only 28 percent indicated they’ve reached a service-driven or intent-based network. However, when asked where their networks will be in two years, 78 percent of respondents believed they would move beyond software-defined towards service-driven and intent-based networks.
Intel intros Tremont microarchitecture
Intel unveiled Tremont, its next-generation, low-power x86 microarchitecture promising significant IPC (instructions per cycle) gains gen-over-gen compared with Intel’s prior low-power x86 architectures.
Tremont is aimed at compact, low-power packages and innovative form factors for client devices, creative applications for the internet of things (IoT), data center products, etc.
Tremont is integrated within a wider set of silicon IPs in Lakefield, which will power innovative devices like the recently announced dual-screen Microsoft Surface Neo. Tremont includes several advancements in ISA (instruction set architecture), microarchitecture, security and power management. Specifically, Tremont’s unique 6-wide (2x3-wide clustered) out-of-order decoder in the front end allows for a more efficient feed to the wider back end, which is fundamental for performance.
The announcement was made at this week's Linley Fall Processor Conference 2019 in Silicon Valley.

https://newsroom.intel.com/wp-content/uploads/sites/11/2019/10/introducing-intel-tremont-microarchiture.pdf
AWS generated Q3 sales of $9 billion, 35% growth
Amazon Web Services generated Q3 revenue of $8.995 billion, up 35% compared to last year.
Trailing 12 months (TTM) revenue was $32.5 billion.
- During the quarter, AWS announced the general availability of G4 instances, a new graphics processing unit (GPU)-powered Amazon Elastic Compute Cloud (Amazon EC2) instance designed to help accelerate machine learning inference and graphics-intensive workloads, both of which are computationally demanding tasks that benefit from additional GPU acceleration.
- AWS also announced the opening of the AWS Middle East (Bahrain) Region. Developers, startups, and enterprises, as well as government, education, and non-profit organizations, can now run their applications and serve end-users from data centers located in the Middle East. AWS now spans 69 Availability Zones within 22 geographic regions around the world, and has announced plans for ten more Availability Zones and three more AWS Regions in Indonesia, Italy, and South Africa.
- AWS announced a 44% reduction in storage prices for Amazon Elastic File System (Amazon EFS) Infrequent Access (IA) storage class, one of the largest percentage price reductions in AWS history. Amazon EFS is a low-cost, simple to use, fully managed, and cloud-native NFS file system for Linux-based workloads that can be used with AWS services and on-premises resources. Amazon EFS IA is a storage class for Amazon EFS that is designed for files accessed less frequently, enabling customers to reduce storage costs compared to the Amazon EFS Standard storage class. AWS has reduced prices six times thus far in 2019, and this marks the 75th price reduction since its inception.
Juniper posts revenue of $1.1 billion - cloud and enterprise sales rising
Juniper Networks reported Q3 2019 net revenues of $1,133.1 million, a decrease of 4% year-over-year, and an increase of 3% sequentially. GAAP operating margin was 12.2%, a decrease from 13.6% a year earlier and an increase from 7.5% in the preceding quarter.
GAAP net income was $99.3 million, a decrease of 56% year-over-year, and an increase of 115% sequentially, resulting in diluted earnings per share of $0.29.
Non-GAAP net income was $166.6 million, a decrease of 13% year-over-year, and an increase of 19% sequentially, resulting in non-GAAP diluted earnings per share of $0.48.
“We believe we are executing well in a dynamic environment," said Rami Rahim, Juniper’s Chief Executive Officer. “While we are encouraged to see improved momentum with our Cloud customers, Service Provider spending remains challenged and we experienced weaker than expected Enterprise orders in the September quarter. Despite this backdrop, we still expect to deliver modest year-over-year growth during the December quarter and remain optimistic regarding our long-term growth prospects.”
Some highlights (yoy comparisons):
- Cloud increased 6% and Enterprise increased 8%, while Service Provider declined 17%. The lower-than-midpoint revenue result was due to greater-than-anticipated Service Provider weakness.
- On a sequential basis, Enterprise increased 10%, Service Provider increased 1% and Cloud was down 5%.
- Routing decreased 18% year-over-year and 2% sequentially. Switching increased 9% year-over-year and 12% sequentially.
- Security increased 22% year-over-year and 16% sequentially. Our Services business increased 1% year-over-year and was flat sequentially.
- Software revenue increased 13% year-over-year and was approximately 10% of total revenue.
- Of the top 10 customers for the quarter, three were Cloud, six were Service Provider, and one was an Enterprise.
Holland's SURF research net evaluates ECI's 1.2Tbps optical transport blade
SURF, the Dutch National Research and Education Network, is testing ECI's Apollo TM1200 1.2T dual channel, programmable blade.
The trial runs over a 1650km link connecting SURF’s main facility in Amsterdam with CERN’s communication center in Geneva.
ECI said the trial demonstrated Apollo’s ability to support live traffic of 300 Gbps per wavelength over predominantly old (G.655) fibers, traversing 22 intermediate nodes without any signal regeneration or Raman amplification. Link capacity was increased by roughly 150% by optimizing line-rate modulation.
SURF’s optical backbone, SURFnet 8, was upgraded a couple of years ago to address the rapid growth in demand for bandwidth. The search for a new vendor encompassed nine candidates, from which ECI was selected. Based on the Apollo family, SURF continues rebuilding its optical backbone for the future, achieving high performance, economic scalability, ease of operations, and a seamless migration from the previous infrastructure. The network also exemplifies an Open Line System (OLS) by carrying both ECI and alien (third-party) lambdas.
“The TM1200 adds yet another layer of flexibility and programmability to our optical capabilities. With the TM1200 we can now optimize modulation schemes in line with our requirements and the distances transmitted, ensuring optimal use of our fiber capacity,” said Rob Smets, Network Architect at SURF. “We were pleased to discover we could improve link capacity and efficiency by approximately 150% just by replacing the card, even on our ‘old’ (G.655) fibers. With ECI’s help and our continuously updated network capabilities, we will continue to provide our millions of users with the levels of performance and service to which they’ve become accustomed.”
“We understand that today’s operators are under pressure to squeeze the most out of their network infrastructures. Optical backbones will forever be required to support, and exceed, simple low cost per bit transport,” said Christian Erbe, VP Sales EMEA at ECI. “However, there are increasing requirements for openness, programmability and interworking with the packet layer. ECI has a very strong relationship with national research and education networks (NREN) worldwide, and we are proud of our long-lasting partnership with SURF.”
ECI introduced its TM1200, a 1.2T blade (dual 600G channel) for its Apollo DWDM transport systems, enabling programmable, adaptive optical networking.
ECI said its new TM1200 blade delivers unmatched spectral efficiency and elasticity through software controllable continuous modulation. Whereas traditional line-side modulation was only programmable in large increments – such as 100G, 200G or 400G – often relying on different line cards, the new TM1200 delivers software-controlled continuous modulation in 50 Gbps increments up to 600 Gbps line rate, rather than supporting specific modulation schemes. This maximizes capacity in a granular manner to best match client needs and variable channel conditions.
- Optimal return on fiber investment: By operating at the edge of the Shannon limit, the TM1200 squeezes the maximum capacity from each channel on a fiber, delaying the need to add new fiber and optical networking infrastructure.
- Enables a highly adaptive and flexible optical layer: Working in conjunction with ECI's colorless, directionless, contentionless, flexible spectrum ROADMs, and client services aware SDN control, the TM1200 can continuously optimize client traffic to fiber capacity.
- Dynamic restoration: Excess capacity can be allocated dynamically to fully or partially restore client services that are disrupted by fiber or equipment failures elsewhere in the network.
- Power efficiency: At a 600 Gbps line rate, the ECI TM1200 has a 10-fold improvement in power efficiency compared to other solutions, consuming less than 0.18W per Gbps, fully populated.
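The 50 Gbps-granular rate selection described above can be sketched as a simple capacity-fitting function. The logic is illustrative only, not ECI's actual adaptation algorithm, and the channel-capacity input is assumed to come from the card's own channel measurements.

```python
def select_line_rate(channel_capacity_gbps, step=50, max_rate=600):
    """Pick the highest line rate, in 50G increments up to the 600G
    maximum, that the estimated channel capacity can carry
    (illustrative logic, not ECI's algorithm)."""
    rate = min(int(channel_capacity_gbps // step) * step, max_rate)
    return max(rate, step)  # never drop below the smallest increment

assert select_line_rate(430.0) == 400   # fits eight 50G steps
assert select_line_rate(875.0) == 600   # capped at the card's 600G maximum
assert select_line_rate(60.0) == 50     # floor at one increment
```

Compared with coarse 100G/200G/400G steps, the finer granularity lets the card hold, for example, 400G rather than falling back to 200G on a channel that can carry 430G.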
MWC Los Angeles attracted 22,000 visitors
This week's 2019 MWC Los Angeles event at the Los Angeles Convention Center (LACC) attracted nearly 22,000 attendees from more than 100 countries.
The event, which was hosted by GSMA in partnership with CTIA, reported that over 60% of attendees held senior-level positions, including nearly 2,000 CEOs.
The theme of MWC Los Angeles was "Intelligent Connectivity".
“It’s exciting to see MWC Los Angeles’ identity taking form as a leading event in the region,” said John Hoffman, CEO, GSMA Ltd. “This year’s theme has again marked the importance of 5G; we are on the verge of the critical phase of unlocking infinite possibilities in connecting everyone and everything to a better future.”
MWC 2020 will return to Los Angeles and will take place at the LACC from October 28-30, 2020.
O2 activates 5G across London and Slough with Nokia
O2 activated its 5G network in London and Slough.
Nokia serves as the sole RAN provider to O2 across London.
Nokia said it is working with O2 to execute its intelligence-led rollout strategy, which prioritizes connecting transport hubs, key business areas, and entertainment and sports venues to ensure superior customer experience for local and international visitors.
Brendan O’Reilly, O2’s Chief Technology Officer, said: “As we roll out 5G, our intelligence-led strategy is driven by data and insight to identify where customers will benefit from 5G the most – Nokia is helping us to deliver that in one of the most high-density subscriber areas in the world. The transformational power of 5G yields huge potential for businesses and consumers alike, allowing them to get a head start on competitors.”
Tommi Uitto, President of Mobile Networks at Nokia, said: “Nokia and O2 are bringing 5G to the UK capital, delivering the world’s most iconic venues to devices globally. Nokia has exactly the right technology for this, given our leadership in small cells and, more broadly, end-to-end 5G. It’s great to build on our long-standing relationship with O2 to deliver a superior experience for businesses and consumers alike."
The launch of O2’s live network in the UK marks Nokia’s 15th live 5G network worldwide.