Technology moves in waves, and each wave is nothing but part of a continuous cycle. We have been through sequential technology cycles in which we tend to ‘time capsule’ into a future derived from the past:
Mainframes > Open Systems > Virtualization (Hypervisor) > Bare Metal (containers)
When we started computing in the old days, we followed a model of ‘start small’, ‘keep everything local’, and stay ‘decentralized’. During the pre/post-.COM era we came to trust the CoLo providers and opened up to the centralization of computing; data kept being built and managed at the CoLos. Then the Cloud was born, which is an authentic ‘centralization’ of computing.
All the ramblings in tech circles about how the ‘EDGE’ would eat the Cloud set me up with a curious platform for writing this piece.
What in the world is the EDGE?
Edge computing is a method of accelerating and improving the performance of cloud computing for mobile and end users. The computational and data-processing stack runs on the users’ own end devices, while the Cloud is consumed purely for failsafe, recovery, and long-term-storage needs.
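A minimal sketch of that division of labour, assuming a hypothetical sensor device: the latency-sensitive work happens on the device itself, and the Cloud (stubbed here as an in-memory list) only receives results for recovery and long-term storage. All names are illustrative, not from any real edge framework.

```python
import json
import time


def process_locally(reading):
    """Fast path: run the latency-sensitive computation on the device itself."""
    return {"device_ts": time.time(), "value": reading * 2}


def archive_to_cloud(result, archive):
    """Slow path: ship the processed result off-device for recovery and
    long-term storage. The 'cloud' is stubbed as an in-memory list here;
    a real device would batch and upload asynchronously so the upload
    never blocks local processing."""
    archive.append(json.dumps(result))


cloud_archive = []  # stand-in for a remote object store
for sensor_reading in [1, 2, 3]:
    result = process_locally(sensor_reading)   # decision made at the edge
    archive_to_cloud(result, cloud_archive)    # Cloud used for backup only

print(len(cloud_archive))  # 3 archived records
```

The design choice is the point: the loop never waits on the Cloud to decide anything, which is exactly the inversion of the cloud-first model.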
This resonates with an anti-Cloud vibe, doesn’t it? Does it mean a decentralization era all over again?
A big YES! We will become decentralized yet again. Remember the cycles we touched on earlier in this piece!
So why do people think edge computing will blow away the cloud? This claim is made in many online articles. Clint Boulton, for example, writes about it in his Asia Cloud Forum article, ‘Edge Computing Will Blow Away The Cloud’, in March this year. He cites venture capitalist Andrew Levine, a general partner at Andreessen Horowitz, who believes that more computational and data processing resources will move towards “edge devices” – such as driverless cars and drones – which make up at least part of the Internet of Things. Levine prophesies that this will mean the end of the cloud as data processing will move back towards the edge of the network.
In other words, the trend up to now has been to centralise computing within the data centre, while in the past it was often decentralised or localised nearer to the point of use. Levine sees a driverless car as being a data centre in itself; it has more than 200 CPUs working to enable it to operate without going off the road and causing an accident. The nature of autonomous vehicles means that their computing capabilities must be self-contained, and to ensure safety they minimise any reliance they might otherwise have on the cloud. Yet they don’t dispense with it.
The two approaches may in fact end up complementing each other. Part of the argument for bringing data computation back to the edge comes down to increasing data volumes, which lead to ever more frustratingly slow networks. Latency is the culprit. Data is becoming ever larger: there is going to be more data per transaction, more video and sensor data, and virtual and augmented reality are going to play an increasing part in its growth too. With this growth, latency will become more challenging than it was previously. Furthermore, while it might make sense to put data close to a device such as an autonomous vehicle to eliminate latency, a remote way of storing data via the cloud remains critical.
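The latency argument is easy to make concrete with back-of-envelope arithmetic. The figures below (payload size, uplink bandwidth, round-trip time, on-device processing time) are illustrative assumptions, not measurements:

```python
# Rough comparison: shipping one second's worth of sensor/video data to a
# remote cloud region versus handling it on-device. All numbers are
# illustrative assumptions.

payload_mb = 25.0    # assumed: 1 s of HD video plus sensor data, in megabytes
uplink_mbps = 50.0   # assumed uplink bandwidth, in megabits per second
rtt_ms = 80.0        # assumed network round-trip time to a cloud region

# Time to push the payload over the uplink (megabytes -> megabits -> ms)
transfer_ms = payload_mb * 8 / uplink_mbps * 1000
cloud_total_ms = rtt_ms + transfer_ms

local_ms = 5.0       # assumed on-device processing time

print(f"cloud round trip: {cloud_total_ms:.0f} ms")  # 4080 ms
print(f"on-device:        {local_ms:.0f} ms")        # 5 ms
```

Under these assumptions the cloud path is three orders of magnitude slower, and most of the cost is moving the data, not the round trip itself, which is why heavy data is pushed toward the edge while the cloud keeps the archival role.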
For the last several years, enterprises have focused on cloud computing, and have been developing strategies to “move to the cloud” or at least “expand into the cloud.” It’s been a one-way, straight highway. There’s a sharp left turn coming ahead, where we need to expand our thinking beyond centralization and cloud, and toward location and distributed processing for low-latency and real-time needs. Customer experience won’t simply be defined by a website experience. The cloud will have its role, but the edge is coming, and it’s going to be big.
I’m reminded of an unattributed quote that seems to apply every time a new idea pops up in the world of technology:
“Look back to where you have been, for a clue to where you are going.”