Is Cost-Effective Deep Reinforcement Learning Possible?


“Is there scientific value in conducting empirical research in reinforcement learning when restricting oneself to small- to mid-scale environments?”

Can research done on a smaller computational budget provide valuable scientific insights? Given today's enormous training times and budgets, it is natural to wonder whether anything worthwhile in AI comes at a small price. So far, researchers have focused on the training costs of language models, which have grown exceedingly large. But what about deep reinforcement learning (RL) algorithms, the brains behind autonomous cars, warehouse robots, and even the AI that beat chess grandmasters?

Deep RL combines reinforcement learning with deep learning. It made a splash back in 2015, when DeepMind released its work on Deep Q-Networks (DQN). When tested on Atari 2600 games, the DQN agent surpassed the performance of all previous algorithms and achieved a level comparable to that of a professional human games tester.
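To make the idea concrete, here is a minimal sketch of the core DQN update: a one-step temporal-difference loss computed against a frozen target network. It assumes PyTorch; the network size, hyper-parameters, and the random tensors standing in for a replay-buffer batch are illustrative assumptions, not DeepMind's original Atari configuration.

```python
# Minimal DQN update sketch (illustrative, not the original Atari setup).
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One-step TD loss: Q(s, a) vs r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # the target network is held fixed during the update
        max_next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * max_next_q
    return nn.functional.mse_loss(q_sa, target)

# Illustrative usage with random tensors standing in for a replay-buffer batch.
state_dim, n_actions, batch_size = 8, 4, 32
q_net = QNetwork(state_dim, n_actions)
target_net = QNetwork(state_dim, n_actions)
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

batch = (
    torch.randn(batch_size, state_dim),          # states
    torch.randint(0, n_actions, (batch_size,)),  # actions taken
    torch.randn(batch_size),                     # rewards
    torch.randn(batch_size, state_dim),          # next states
    torch.randint(0, 2, (batch_size,)).float(),  # episode-done flags
)
loss = dqn_loss(q_net, target_net, batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```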

However, according to Google researchers, the advancement of deep RL comes at a cost: a computational one. The original DQN algorithm was tweaked over the years to improve performance on the Arcade Learning Environment (ALE) benchmark, which is widely used as an interface for benchmarking deep RL models on Atari games. The Rainbow algorithm is one such improvement, combining several extensions to DQN and helping the paradigm attain state-of-the-art status. However, Rainbow is heavy on the computational front.
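For readers unfamiliar with the benchmark, ALE exposes Atari games through a standard agent-environment loop. The sketch below shows one way to drive it via the Gymnasium interface; it assumes the `gymnasium` and `ale-py` packages are installed, and the choice of Pong and the random policy are purely illustrative.

```python
# Driving an ALE Atari game through Gymnasium (illustrative sketch).
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # newer ale-py versions need explicit registration

env = gym.make("ALE/Pong-v5")
obs, info = env.reset(seed=0)

episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy, just to drive the loop
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated

print(f"Random-policy episode return: {episode_return}")
env.close()
```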

Rainbow was first introduced in 2018. The experiments reportedly required a large research lab setup, as it took roughly five days to fully train an agent on specialised hardware such as the NVIDIA Tesla P100 GPU. According to the Google researchers, providing the evidence for Rainbow’s superiority required approximately 34,200 GPU-hours (about 1,425 days on a single GPU). Moreover, this figure does not include the hyper-parameter tuning that was necessary to optimise the various components. “Considering that the cost of a Tesla P100 GPU is around $6,000, providing this evidence will take an unreasonably long time as it is prohibitively expensive to have multiple GPUs in a typical academic lab so they can be used in parallel,” according to Google researchers.
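The scale of those numbers is easy to verify with back-of-envelope arithmetic. The snippet below reproduces the article's own figures (34,200 GPU-hours, roughly $6,000 per Tesla P100); the parallelism counts are hypothetical, chosen only to show how slowly wall-clock time shrinks for a small lab.

```python
# Back-of-envelope arithmetic using the figures quoted in the article.
GPU_HOURS_TOTAL = 34_200  # reported compute to establish Rainbow's superiority
HOURS_PER_DAY = 24
P100_PRICE_USD = 6_000    # quoted approximate price of one Tesla P100

gpu_days = GPU_HOURS_TOTAL / HOURS_PER_DAY
print(f"Total compute: {gpu_days:.0f} GPU-days")  # -> 1425 GPU-days

# Hypothetical parallelism: wall-clock time and up-front hardware cost.
for n_gpus in (1, 4, 8):
    wall_clock_days = gpu_days / n_gpus
    hardware_cost = n_gpus * P100_PRICE_USD
    print(f"{n_gpus} GPU(s): {wall_clock_days:.0f} days, ${hardware_cost:,} in hardware")
```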

In their work titled “Revisiting Rainbow”, the Google researchers set out to answer the following questions:

  • Would state-of-the-art results on the ALE have been possible with smaller-scale experiments, unlike those used for Rainbow back in 2018?
  • How well do these algorithms perform in non-ALE environments?
  • Is there scientific value in conducting empirical research in reinforcement learning when restricting oneself to small- to mid-scale environments?

Source: https://analyticsindiamag.com/cost-effective-deep-reinforcement-learning/