Is Cost-Effective Deep Reinforcement Learning Possible?

“Is there scientific value in conducting empirical research in reinforcement learning when restricting oneself to small- to mid-scale environments?”

Can research done on a smaller computational budget provide valuable scientific insights? Given today's enormous training times and budgets, it is natural to wonder whether anything worthwhile in AI comes at a small price. So far, researchers have focused on the training costs of language models, which have grown extremely large. But what about deep reinforcement learning (RL) algorithms, the brains behind autonomous cars, warehouse robots, and even the AI that beat chess grandmasters?

Deep RL combines reinforcement learning with deep learning. It made a splash back in 2015 when Alphabet’s DeepMind published its work on Deep Q-Networks (DQN). When tested on Atari 2600 games, the DQN agent surpassed the performance of all previous algorithms and achieved a level comparable to that of a professional human games tester.
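At its core, DQN trains a neural network to approximate the action-value function by regressing it toward a bootstrapped target. The snippet below is a minimal sketch of that update in PyTorch, assuming a hypothetical `q_net`/`target_net` pair and a sampled batch of transitions; these names are illustrative placeholders, not code from the original paper.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One DQN update: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states, dones = batch
    # Q-values of the actions actually taken in the replayed transitions
    q_taken = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target from a periodically updated copy of the network
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    # Temporal-difference error, commonly implemented as a Huber loss
    return F.smooth_l1_loss(q_taken, target)
```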

However, according to Google researchers, the advancement of deep RL comes at a cost: a computational one. The original DQN algorithm has been tweaked over the years to push results on the Arcade Learning Environment (ALE), a benchmark widely used as an interface for evaluating deep RL models on Atari games. The Rainbow algorithm is one such improvement: it combines several extensions to DQN into a single agent, and it helped the DQN paradigm attain state-of-the-art status. However, Rainbow is heavy on the computational front.

Rainbow was first introduced in 2018. The experiments reportedly required a large research lab setup, as each agent took roughly five days to fully train on specialised hardware such as the NVIDIA Tesla P100 GPU. According to Google researchers, demonstrating Rainbow’s superiority required approximately 34,200 GPU hours (roughly 1,425 days on a single GPU). Moreover, this cost does not include the hyper-parameter tuning that was necessary to optimise the various components. “Considering that the cost of a Tesla P100 GPU is around $6,000, providing this evidence will take an unreasonably long time as it is prohibitively expensive to have multiple GPUs in a typical academic lab so they can be used in parallel,” according to the Google researchers.
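The scale of those numbers is easy to sanity-check with back-of-envelope arithmetic. The short sketch below only reproduces conversions from the figures quoted above; the 30-day deadline is a hypothetical illustration, not something from the article.

```python
gpu_hours = 34_200        # compute reported for demonstrating Rainbow's superiority
days_on_one_gpu = gpu_hours / 24
print(days_on_one_gpu)    # 1425.0 days, i.e. roughly 3.9 years on a single GPU

# To finish in, say, 30 days (hypothetical), the work must run in parallel:
gpus_needed = gpu_hours / (30 * 24)    # 47.5 -> ~48 GPUs
hardware_cost = 48 * 6_000             # at ~$6,000 per Tesla P100, ~$288,000
print(gpus_needed, hardware_cost)      # 47.5  288000
```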

In their work titled “Revisiting Rainbow”, the researchers at Google tried to answer the following questions:

  • Would state-of-the-art results on the ALE have been possible with smaller-scale experiments, unlike those used for Rainbow back in 2018?
  • How well do these algorithms perform in non-ALE environments?
  • Is there scientific value in conducting empirical research in reinforcement learning when restricting oneself to small- to mid-scale environments?

Source : https://analyticsindiamag.com/cost-effective-deep-reinforcement-learning/
