
Digital twin-driven deep reinforcement learning for adaptive task allocation in robotic construction


          

Journal: Advanced Engineering Informatics
Authors: Dongmin Lee (School of Architecture & Building Science, Chung-Ang University)
SangHyun Lee (Tishman Construction Management Program, Dept. of Civil and Environmental Engineering, Univ. of Michigan)
Neda Masoud (Dept. of Civil and Environmental Engineering, Univ. of Michigan)
M. S. Krishnan (Computer Information Systems, Technology and Operations, Ross School of Business, Univ. of Michigan)
Victor C. Li (Dept. of Civil and Environmental Engineering, Univ. of Michigan)
Journal no.: 738C0037
ISSN: 1474-0346
Year: 2022
Volume/issue: 2022, vol. 53
Pages: 101710-1 to 101710-12
Page count: 12
Classification: TP18; TP3
Keywords: Digital twin; Proximal policy optimization (PPO); Deep reinforcement learning (DRL); Autonomous robot; Adaptive task allocation
Language: English
Abstract: In order to accomplish diverse tasks successfully in a dynamic (i.e., changing over time) construction environment, robots should be able to prioritize assigned tasks to optimize their performance in a given state. Recently, a deep reinforcement learning (DRL) approach has shown potential for addressing such adaptive task allocation. It remains unanswered, however, whether DRL can address adaptive task allocation problems in dynamic robotic construction environments. In this paper, we developed and tested a digital twin-driven DRL learning method to explore the potential of DRL for adaptive task allocation in robotic construction environments. Specifically, the digital twin synthesizes sensory data from physical assets and is used to simulate a variety of dynamic robotic construction site conditions within which a DRL agent can interact. As a result, the agent can learn an adaptive task allocation strategy that increases project performance. We tested this method with a case project in which a virtual robotic construction project (i.e., interlocking concrete bricks are delivered and assembled by robots) was digitally twinned for DRL training and testing. Results indicated that the DRL model's task allocation approach reduced construction time by 36% in three dynamic testing environments when compared to a rule-based imperative model. The proposed DRL learning method promises to be an effective tool for adaptive task allocation in dynamic robotic construction environments. Such an adaptive task allocation method can help construction robots cope with uncertainties and can ultimately improve construction project performance by efficiently prioritizing assigned tasks.
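The record above does not include the paper's implementation. As a rough illustration of the core idea in the abstract (an agent learning to prioritize assigned tasks rather than following a fixed rule), here is a minimal tabular Q-learning sketch on a toy weighted-scheduling problem. This is a deliberate simplification: the paper trains a PPO agent inside a digital-twin simulation of a construction site, whereas the task set, durations, and weights below are invented for illustration only.

```python
import random

# Toy adaptive task-allocation MDP (hypothetical data, not from the paper):
# a robot must choose which remaining task to do next; tasks differ in
# duration p_j and importance weight w_j.
durations = [3, 1, 2]   # processing time p_j of each task
weights   = [1, 3, 2]   # importance w_j of each task

def step(remaining, j):
    """Process task j. The step cost is p_j times the total weight of all
    tasks still waiting (including j); summed over an episode this equals
    the weighted completion time sum_j w_j * C_j, so the set of remaining
    tasks is a sufficient (Markov) state."""
    cost = durations[j] * sum(weights[k] for k in remaining)
    return remaining - {j}, -cost

def train(episodes=3000, alpha=0.5, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s = frozenset(range(len(durations)))
        while s:
            acts = sorted(s)
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda j: Q.get((s, j), 0.0))
            s2, r = step(s, a)
            target = r + max((Q.get((s2, j), 0.0) for j in s2), default=0.0)
            Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * target
            s = s2
    return Q

def greedy_order(Q):
    """Roll out the learned policy greedily to get a task priority order."""
    s, order = frozenset(range(len(durations))), []
    while s:
        a = max(sorted(s), key=lambda j: Q.get((s, j), 0.0))
        order.append(a)
        s = s - {a}
    return order

if __name__ == "__main__":
    Q = train()
    # For this cost structure the optimal policy is Smith's rule:
    # schedule tasks by w_j / p_j descending, i.e. order [1, 2, 0] here.
    print(greedy_order(Q))
```

The learned priority order can be checked against the known optimum (Smith's rule), which plays the role of the "rule-based" baseline here; the paper's setting is far richer, with robot travel, delivery, and assembly dynamics simulated by the digital twin and a neural PPO policy in place of the Q-table.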