You need to put workloads in places that make operational and economic sense.
When you’ve done that thoughtfully, you’re going to end up with workloads in a mix of places:
AWS. Azure. GCP. And then a variety of on-premises locations: colocation facilities, your own data centers, and edge locations where data must be processed immediately to be useful.
So…yeah. You’ve got compute spread all over the place. How do you make smart decisions about where, architecturally, certain workloads should live? And once you’ve sorted that out, how do you handle them operationally?
Here to help reduce your stress is our sponsor, VeloCloud, a VMware company. Joining us from VeloCloud is Marco Murgia, Senior Director of Product Engineering. We discuss:
- How cloud and SaaS affect network design and operations
- The pros and cons of connecting your data center to cloud providers such as AWS and Azure
- Stitching together applications and services from disparate public clouds
- Whether engineers need to understand each public cloud’s peculiarities
- The role of mid-mile and last-mile connectivity
- How to take advantage of edge compute
VeloCloud Architecture – VMware VeloCloud
Analyst Reports, White Papers and eBooks – VMware VeloCloud