Depending on the statistics you read, between 2.5 and 3.5 quintillion bytes of data are generated daily. Your business generates only a minuscule slice of this, but scale is relative: if you can't manage a few gigabytes, you might as well be facing the full 3.5 quintillion bytes.

The IT world has coined a phrase for the sheer force behind data: data gravity. In physics, gravity is linked to mass, and "data gravity" works the same way. Although we can't quantify data's mass, we can say that the more of it there is, the greater its pull on storage, capacity, time, and resources.

For enterprises wanting to be digital first – basing operational models and executive decisions on data – overcoming data gravity is critical. What does this mean? It means getting to grips with the volume of data generated and the size of the processing window, so that data is processed efficiently.

The problem of data gravity

The amount of data enterprises generate is enormous. Something as simple as a warehouse camera feed can generate hundreds of megabytes per second, and that data often needs to be processed in under a second for the surrounding data environment to function.
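To make those numbers concrete, here is a rough back-of-envelope sketch. The feed rate and processing window are illustrative assumptions, not measured figures:

```python
# Back-of-envelope: how much data must be handled per processing window?
# All figures below are illustrative assumptions.

feed_rate_mb_per_s = 300   # assumed camera feed rate: ~300 MB/s
window_s = 0.5             # assumed processing window: half a second

data_per_window_mb = feed_rate_mb_per_s * window_s
print(f"{data_per_window_mb:.0f} MB must be processed every {window_s}s window")
# With a 300 MB/s feed and a 0.5 s window, that's 150 MB per window –
# before any other feeds or workloads are added.
```

Even one modest feed demands sustained throughput most centralised pipelines were never designed for.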

More data means less time to manage it, shrinking the window in which you can create value from it. When data gravity gets high enough, data gets sucked into a black hole from which no value or logic can escape.

You can think of the problem of data gravity like this: the more data you generate, the less time you have to process each piece of it. You can keep adding resources to manage it, but eventually it becomes impossible to keep up with growing demand.

One solution is edge architecture. By keeping compute, storage, data management, and control at the edge, you can handle high-volume data streams, make decisions quickly, and eliminate technology-induced latency.

Edge computing is a distributed, open IT architecture designed to decentralise computation and storage. It addresses data gravity through data-stream acceleration and real-time processing close to the data source, rather than in a central, remote data centre, which speeds up operations.
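As a simplified illustration of that principle, the sketch below filters and aggregates a raw sensor stream at the edge, so only a compact summary travels to the central data centre. The function name, alert rule, and threshold are all hypothetical:

```python
# Hypothetical illustration: summarise raw readings at the edge so only
# a small aggregate is shipped to the central data centre.

def summarise_at_edge(readings, threshold=75.0):
    """Keep only what the centre needs: count, maximum, and alert count."""
    alerts = [r for r in readings if r > threshold]  # assumed alert rule
    return {
        "count": len(readings),
        "max": max(readings),
        "alerts": len(alerts),
    }

# The raw readings stay local; only the summary leaves the site.
raw = [62.1, 70.4, 81.3, 68.9, 77.5]
summary = summarise_at_edge(raw)
print(summary)  # {'count': 5, 'max': 81.3, 'alerts': 2}
```

The design choice is the point: five readings in, three numbers out, and the bulk of the data never travels at all.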

Data gravity’s strong pull calls for edge computing alongside applications, services, and frameworks that process and manage data in real time, i.e., within milliseconds. This is a barrier to becoming data-first because it requires digital transformation on an epic scale, but it is by no means an impossible task.

To discuss how DSI can help you build a data-first strategy for your business, give us a call.