Understanding Q-learning in Reinforcement Learning



  • $\epsilon$-greedy: with probability $1-\epsilon$ choose the best-valued action, and with probability $\epsilon$ choose an action at random, where $0<\epsilon<1$.
    The essence of Q-learning: learn the value of taking a particular action once the state is fixed; the reward $R(s,a)$ is the immediate payoff of the resulting state transition.
    The Q function takes two arguments, $s$ and $a$, written $Q(s,a)$. With $n$ states and $m$ actions, picture an $n\times m$ matrix holding the Q values of all $n\times m$ state-action pairs.
    Alongside it there is a second $n\times m$ matrix holding the reward for each of the $n$ states combined with each of the $m$ actions; these rewards are given up front.
    The Q-table (matrix) is updated as follows: the stored value becomes a weighted sum of the current reward and the estimated future reward, and the state advances at the same time:
    $Q(s,a)\leftarrow(1-\alpha)\,Q(s,a)+\alpha\left[R(s,a)+\gamma\max_{a'} Q\left(s^\prime, a'\right)\right]$
    $s\leftarrow s^\prime$
    As $\alpha\rightarrow 1$ the update is dominated by the new target (the immediate reward plus the discounted future estimate); as $\alpha\rightarrow 0$ the old Q value is kept and effectively no update happens.
    Limitation: when the state space grows very large ($n\rightarrow\infty$), the table becomes infeasible to store and update.
    A code example follows:

    # -*- coding:utf-8 -*-
    # |-----|-----|-----|
    # |     |     |     |
    # |  0  |  1  |  2  |
    # |_____|_____|_____|
    # |     |     |     |
    # |  3  |  4  |  5  |
    # |_____|_____|_____|
    # Entering room 0 yields a reward of 10; every other move yields -1.

    import argparse
    import random

    parser = argparse.ArgumentParser()
    parser.add_argument('-g', type=float, default=0.8, dest='gamma')    # discount factor
    parser.add_argument('-s', type=int, default=1000, dest='step_num')  # number of episodes
    parser.add_argument('-e', type=float, default=1.0, dest='epsilon')  # exploration rate (1.0 = always explore)
    args = parser.parse_args()

    # reward[s][a]: reward for moving from room s to room a; 0 marks an invalid move
    reward = [[0, -1, 0, -1, 0, 0],
              [10, 0, -1, 0, -1, 0],
              [0, -1, 0, 0, 0, -1],
              [10, 0, 0, 0, -1, 0],
              [0, -1, 0, -1, 0, -1],
              [0, 0, -1, 0, -1, 0]]
    q_table = [[0 for x in range(6)] for y in range(6)]

    for step in range(args.step_num):
        s = random.randint(1, 5)      # start each episode in a random non-goal room
        while s != 0:                 # an episode ends on entering room 0
            action_list = [x for x in range(6) if reward[s][x] != 0]
            if random.random() < args.epsilon:
                s2 = random.choice(action_list)                     # explore
            else:
                s2 = max(action_list, key=lambda x: q_table[s][x])  # exploit among valid moves only
            # alpha = 1 variant of the update rule, valid for a deterministic environment
            q_table[s][s2] = reward[s][s2] + args.gamma * max(q_table[s2])
            s = s2
    print(q_table)
    # Converged Q-table (gamma = 0.8), one row per state:
    # [ 0, 0, 0,   0, 0,   0   ]
    # [10, 0, 4.6, 0, 4.6, 0   ]
    # [ 0, 7, 0,   0, 0,   2.68]
    # [10, 0, 0,   0, 4.6, 0   ]
    # [ 0, 7, 0,   7, 0,   2.68]
    # [ 0, 0, 4.6, 0, 4.6, 0   ]
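The example above hard-codes α = 1 (the target fully replaces the old value), which is fine in a deterministic environment but hides the learning rate that appears in the update rule. Below is a minimal sketch of the same six-room task with an explicit α and ε-greedy selection; function and variable names are my own:

```python
import random

# Same six-room layout as the example above: entering room 0 yields +10,
# every other move yields -1, and 0 marks an invalid transition.
REWARD = [[0, -1, 0, -1, 0, 0],
          [10, 0, -1, 0, -1, 0],
          [0, -1, 0, 0, 0, -1],
          [10, 0, 0, 0, -1, 0],
          [0, -1, 0, -1, 0, -1],
          [0, 0, -1, 0, -1, 0]]

def q_learning(episodes=2000, alpha=0.5, gamma=0.8, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * 6 for _ in range(6)]
    for _ in range(episodes):
        s = rng.randint(1, 5)              # random non-goal start room
        while s != 0:                      # episode ends at room 0
            valid = [a for a in range(6) if REWARD[s][a] != 0]
            if rng.random() < epsilon:     # explore with probability epsilon
                a = rng.choice(valid)
            else:                          # otherwise exploit among valid moves
                a = max(valid, key=lambda x: q[s][x])
            # full update: weighted mix of the old value and the new target
            target = REWARD[s][a] + gamma * max(q[a])
            q[s][a] = (1 - alpha) * q[s][a] + alpha * target
            s = a
    return q

q = q_learning()
```

With α < 1 each update moves Q(s, a) only part of the way toward the target, so the values approach the same fixed point as the α = 1 example (e.g. Q(1, 0) → 10 and Q(2, 1) → 7), just more gradually; in a stochastic environment this averaging is what makes the estimates converge.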
    

  • Core tier

    Too little technical depth. Downvoted.



  • Not enough content, so code is padding it out.



  • @haizi I humbly second that.


 

Copyright © 2018 bbs.dian.org.cn All rights reserved.
