Artificial Intelligence

The Role of Artificial Intelligence Hardware in Data Centres

As the world grapples with a tsunami of data, data centres are evolving fast. The rapid growth of smart connected devices and a massive rise in data consumption are placing enormous pressure on the underlying data centre infrastructure. Data centres have become so complicated that human beings alone can no longer manage this surging complexity without affecting performance and efficiency. Disruptive technologies like artificial intelligence (AI) hardware can significantly improve the efficiency of data centre operations.

As the market for data centre hardware undergoes major disruption, NVIDIA's planned acquisition of Arm, along with new processors and servers, is giving users yearning for more power to tame AI workloads many options. These include new offerings from resurgent players and a herd of start-ups offering specialised chips for AI computing.

The growth of specialised computing has the potential to transform a data centre industry that must adapt to new form factors and higher rack densities. But for all the excitement in the chip and server sector, the racks and rows of most data halls are still populated by Intel chips, especially in the enterprise sector. Chips from NVIDIA and Intel can bring more high-density AI hardware into data centres.

With the arrival of more powerful new chips from several start-ups, we are going to witness a "Cambrian Explosion" of new chips optimised for AI data-crunching, anticipates analyst Karl Freund of Moor Insights. He adds that although these chips may take longer to develop than anyone wants, there is no way even faster chips can come close to keeping up with the growth in models.

The Increase in Domain-Specific Architectures

The development of AI algorithms is accelerating, with new models integrating billions of data points to make recommendations and decisions. Because new models integrate more data, they also require more computing horsepower, driving an AI hardware arms race. The competition to leverage AI is led by the industry's marquee names, including Amazon, Facebook, Google and Microsoft, which are adding intelligence to a wide range of services and applications.

According to David Patterson, a distinguished engineer at Google, domain-specific architecture is the future of computing: one can tailor a chip to a particular domain and ignore the others.

Consider Google's Tensor Processing Units (TPUs), a specialised chip architecture that dramatically boosted Google's processing power. The TPU is a custom application-specific integrated circuit (ASIC) tailored for TensorFlow, the open-source machine learning (ML) library developed by Google. An ASIC is a chip customised to perform a specific task. Embracing a domain-specific approach allowed Google to drop general-purpose features, saving space and energy on the processor. Most importantly, it also allowed Google to deploy massive processing power in a smaller footprint.
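The workload a TPU is built to accelerate is overwhelmingly dense matrix multiplication, the operation at the heart of every neural-network layer. A minimal sketch of that operation in plain NumPy (the shapes here are hypothetical, chosen only for illustration):

```python
import numpy as np

# A TPU's hardware is organised around dense multiply-accumulate:
# multiplying a batch of activations by a layer's weight matrix.
# Shapes below are illustrative, not taken from any real model.
rng = np.random.default_rng(0)
activations = rng.standard_normal((128, 256))  # batch x input features
weights = rng.standard_normal((256, 64))       # input features x output units

# This single matmul is the "hot loop" a domain-specific chip
# like the TPU implements directly in silicon.
layer_output = activations @ weights

print(layer_output.shape)  # (128, 64)
```

Because a domain-specific chip only has to do this one family of operations well, it can spend its transistor budget on multiply-accumulate units rather than the caches and control logic a general-purpose CPU needs.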
