CEO: Demand For Computing Power Is Growing And A Distributed Cloud Network Is The Way To Go

Last year we saw a number of innovative blockchain projects pop up to push traditional technologies to their very limits, and cloud computing was one of the industries tackled. Golem and Elastic generated a lot of excitement within the community, but as their development seemingly slowed, a new project has emerged to push the frontier forward.

To learn about the latest trends in the industry and to better understand the challenges decentralized supercomputers are facing, we talked to Dr. Gilles Fedak, a French academic and co-founder of the project, with more than ten years of research in parallel and distributed computing under his belt.

FL: Someone from outside the industry would imagine that HPC is mostly employed for rendering, large-scale simulations, scientific research and deep learning. What are some emerging areas where individual users might need a supercomputer’s additional power?

Gilles Fedak: It’s actually a really exciting field to be in right now, because demand for computing power is growing as more and more industries embrace big data and graphical content. For individual users, I don’t know. But HPC is still too expensive and too complex for innovative small businesses like start-ups. The Internet of Things (IoT) is taking off, with huge computational and infrastructure needs, and we are also seeing a rise in distributed applications, known as “DApps”, growing up around blockchain and smart-contract technology. What we are doing is lowering the barriers to entry for those who want access to high-performance computing (HPC), which is now possible with a distributed cloud network, and we hope that such a network will meet the needs of those who feel a distributed cloud more adequately serves their distributed business models.

FL: What are the main benefits of decentralized supercomputers when compared to centralized cloud services? What are the drawbacks? Any old issues still unsolved?

Gilles Fedak: One of the main benefits is lower cost, and this can be achieved because ours is a market network for resource providers. Because it’s an open market, it fosters competition between providers, favouring the most competitive cloud providers.

A distributed cloud is also cheaper because it does not rely on huge data centres; instead, the data is distributed across multiple network participants. This allows for more effective solutions like the “off-the-wall” data centre. For example, we see more and more innovative solutions where servers are installed inside residential buildings, which allows for free cooling, zero construction cost, and sometimes energy cogeneration (for instance, the heat dissipated by the processors can be used to warm water). Furthermore, the likelihood of a node being located close to home is higher, which can make Big Data processing more efficient. Because the running costs are lower, the price comes down for users wishing to access HPC services.

Distributed computing systems can also operate more efficiently than their centralised counterparts because they harness computing resources from across the network that would otherwise be wasted; for example, a machine’s downtime can be used to bring in extra revenue. Then there are some obvious security benefits. Many people are not happy to use cloud services for their important digital artefacts because of the security risk: a centralised service is much easier to attack because the data is all in one place. With a distributed cloud we dispense with that single attack vector, and owing to blockchain technology we are also able to secure data cryptographically.
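The cost argument above rests on open-market matching: providers publish offers and a task goes to the cheapest offer that meets its needs. The following is a minimal illustrative sketch of that idea, not the project's actual protocol; all names, fields and the pricing unit are hypothetical.

```python
# Illustrative sketch of open-market resource matching (hypothetical names):
# each provider publishes an offer, and a task is matched to the cheapest
# offer that satisfies its requirements, so competition drives prices down.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    cpu_cores: int
    price_per_hour: float  # hypothetical pricing unit

def cheapest_offer(offers, min_cores):
    """Return the lowest-priced offer with at least min_cores, or None."""
    eligible = [o for o in offers if o.cpu_cores >= min_cores]
    return min(eligible, key=lambda o: o.price_per_hour) if eligible else None

offers = [
    Offer("datacenter-A", 64, 4.00),
    Offer("home-server-B", 8, 0.30),
    Offer("farm-C", 32, 1.50),
]
print(cheapest_offer(offers, min_cores=16).provider)  # prints farm-C
```

With more providers competing in the same open book, the winning price can only move down, which is the mechanism Fedak describes.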

FL: Why do we need several decentralized cloud computing platforms at this point? What does your project bring to the table that is unique?

Gilles Fedak: We are launching a new paradigm for cloud computing, and just as there are many traditional cloud providers, there will be room in the market for many distributed cloud providers. Our particular vision is to create a new ecosystem of companies offering storage, computer farms, data, web hosting, SaaS applications and more, all doing business with one another through the distributed cloud. We believe in a future of decentralised infrastructure and market networks, where Big Data and HPC applications, highly valued datasets, and computing resources (storage, CPU, GPU, etc.) will be monetized on the blockchain with the highest level of transparency, resiliency and security. We are aiming to be first to market, and we think we can achieve this because we have been building distributed computing infrastructures for over 15 years. We know what we need to do.

FL: Many believe that fog computing is a huge leap toward the internet of things. How will your platform facilitate the arrival of this new era?

Gilles Fedak: The idea behind Fog/Edge computing is to draw on a sufficient number of storage and computing resources, most often distributed along the backbone network infrastructure. Fog/Edge is therefore often associated with telco providers and network operators, like Huawei for instance. At the moment, I can’t think of a Fog/Edge solution that spans several different providers. Because our network is by definition multi-provider, we can easily imagine it enabling new Fog/Edge solutions that go beyond the state of the art; think of mobility issues, for instance.

FL: What happens to centralized cloud computing services if your model proves to be successful? Can you think of a scenario where they remain relevant?

Gilles Fedak: Based on our previous experience, we believe in a hybrid model where centralized cloud services still exist and more and more services migrate to the distributed cloud. An interesting trend is the fragmentation of web applications: it started with bare-metal web hosting, then virtualization, now containers and microservices, and in the future maybe unikernels. This trend gives us a lot of opportunities, because a finer grain of computation is easier to distribute. We predict that centralised cloud services will also need to adapt to survive. There will still be a market for centralised cloud services for some time, of course, but we envision rapid growth of the decentralized cloud, pushed by the emergence of blockchain technology and the requirements of distributed applications: ambient AI, distributed deep learning, IoT, parallel stream processing, etc.

FL: Now to speak of the future, do you think demand for computing power might outpace the technological advancement or vice versa? Will cloud and fog computing ever be required for routine tasks? Or will budget desktops become powerful enough to render cloud computing obsolete? What sort of new projects may emerge that will challenge supercomputers of the future?

Gilles Fedak: The future challenges lie less in the availability of massive raw computing power than in the capacity to process the deluge of data generated by the countless devices spread all over our cities. The question is: what kind of infrastructure do we need to store, process, analyze and monetize these data? Of course, the aggregated computing power might look on par with today’s supercomputers (the next generation will be exaflop machines), but such a network has to be imagined and designed in a totally different way.

To give some perspective to this interview, I would like to end it with a description of the blob computing model, which was invented in the early 1980s at the Xerox Palo Alto Research Center (PARC).

We described a computational model based upon the classic science-fiction film, The Blob: a program that started out running in one machine, but as its appetite for computing cycles grew, it could reach out, find unused machines, and grow to encompass those resources. In the middle of the night, such a program could mobilize hundreds of machines in one building; in the morning, as users reclaimed their machines, the “blob” would have to retreat in an orderly manner, gathering up the intermediate results of its computation. (This affinity for night-time exploration led one researcher to describe these as “vampire programs.”)

(John F. Shoch and Jon A. Hupp, 1982)
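The behaviour Shoch and Hupp describe can be captured in a toy simulation. This sketch is purely illustrative (machine names and result strings are invented): the blob expands onto idle machines at night, then retreats in the morning, gathering each intermediate result before giving a machine back.

```python
# Toy simulation of the "blob" model quoted above (illustrative names only):
# a computation grows to encompass idle machines, then retreats in an
# orderly manner, collecting intermediate results as owners reclaim machines.
def run_blob(machines, idle_at_night):
    blob = {}  # machine -> intermediate result held on that machine
    # Night: the blob grows onto every unused machine it can find.
    for m in machines:
        if m in idle_at_night:
            blob[m] = f"partial-result-from-{m}"
    # Morning: users reclaim their machines; the blob retreats, gathering
    # up each machine's intermediate result before leaving it.
    gathered = []
    for m in list(blob):
        gathered.append(blob.pop(m))
    return gathered

machines = ["ws-1", "ws-2", "ws-3", "ws-4"]
print(run_blob(machines, idle_at_night={"ws-1", "ws-3"}))
# prints ['partial-result-from-ws-1', 'partial-result-from-ws-3']
```

A real distributed cloud must additionally handle failures, payment and verification of results, but the grow-and-retreat lifecycle is the same basic pattern.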