Waleed M. Meleis
Jennifer G. Dy, Javed A. Aslam
Date of Award
2010
Degree Name
Doctor of Philosophy
Department or Academic Unit
College of Engineering. Department of Electrical and Computer Engineering.
Keywords
computer engineering, cognitive radio network, function approximation, fuzzy logic, reinforcement learning, rough set
Subject Categories
Adaptive computing system, Approximation theory
Function approximation can be used to improve the performance of reinforcement learners. Traditional techniques, including Tile Coding and Kanerva Coding, can perform poorly when applied to large-scale problems. In our preliminary work, we show that this poor performance is caused by prototype collisions and uneven distributions of prototype visit frequencies. We describe an adaptive Kanerva-based function approximation algorithm based on dynamic prototype allocation and adaptation. We show that probabilistic prototype deletion with prototype splitting makes the distribution of visit frequencies more uniform, and that dynamic prototype allocation and adaptation reduce prototype collisions. This approach can significantly improve the performance of a reinforcement learner.
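To illustrate the flavor of these ideas (not the thesis's exact algorithm), here is a minimal Python sketch of Kanerva coding with probabilistic deletion and splitting of prototypes. The encoding length, activation radius, prototype count, and deletion/split thresholds are all arbitrary assumptions made for the example:

```python
import random

random.seed(0)

DIM = 8           # length of the binary state-action encoding (assumed)
RADIUS = 3        # Hamming-distance threshold for activation (assumed)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def random_vector():
    return tuple(random.randint(0, 1) for _ in range(DIM))

# Prototype table: each prototype carries a weight and a visit counter.
prototypes = [random_vector() for _ in range(20)]
weights = [0.0] * len(prototypes)
visits = [0] * len(prototypes)

def features(s):
    """Indices of prototypes whose receptive field covers s."""
    return [i for i, p in enumerate(prototypes) if hamming(s, p) <= RADIUS]

def value(s):
    """Approximate value of s: sum of the active prototypes' weights."""
    return sum(weights[i] for i in features(s))

def update(s, target, alpha=0.1):
    """Move the approximated value of s toward target; track visits."""
    active = features(s)
    if not active:
        return
    err = target - value(s)
    for i in active:
        weights[i] += alpha * err / len(active)
        visits[i] += 1

def adapt():
    """Probabilistically delete rarely visited prototypes and split
    heavily visited ones, evening out the visit distribution."""
    mean = sum(visits) / len(visits) or 1.0
    for i in range(len(prototypes)):
        if visits[i] > 2 * mean:              # hot prototype: split off a neighbor
            child = list(prototypes[i])
            child[random.randrange(DIM)] ^= 1
            prototypes.append(tuple(child))
            weights.append(weights[i])
            visits.append(0)
            visits[i] = 0
        elif visits[i] < mean / 2 and random.random() < 0.1:
            prototypes[i] = random_vector()   # delete and reallocate
            weights[i], visits[i] = 0.0, 0

# Usage: drive the value of one encoding toward 1.0.
s = prototypes[0]
for _ in range(100):
    update(s, 1.0)
```

Because each update shifts the summed value by a fixed fraction of the error, the approximated value converges geometrically toward the target whenever at least one prototype is active.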
We then show that fuzzy Kanerva-based function approximation can reduce the similarity between the membership vectors of state-action pairs, giving even better results. We use Maximum Likelihood Estimation to adjust the variances of the basis functions and tune the receptive fields of prototypes. This approach completely eliminates prototype collisions and greatly improves the ability of a Kanerva-based reinforcement learner to solve large-scale problems.
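A minimal sketch of the fuzzy variant: each prototype gets a Gaussian basis function whose width is its receptive field, and the membership vector is the vector of basis-function responses. The variance-tuning step below is an assumed simplification of the thesis's MLE procedure (setting each prototype's variance to the mean squared distance of its nearest samples); dimensions and counts are invented:

```python
import math
import random

random.seed(1)

DIM = 4   # continuous state-action encoding length (assumed)
prototypes = [[random.random() for _ in range(DIM)] for _ in range(10)]
sigmas = [1.0] * len(prototypes)    # one receptive-field width per prototype
weights = [0.0] * len(prototypes)

def sq_dist(s, p):
    return sum((a - b) ** 2 for a, b in zip(s, p))

def membership(s):
    """Fuzzy membership vector: one Gaussian basis value per prototype."""
    return [math.exp(-sq_dist(s, p) / (2 * sig ** 2))
            for p, sig in zip(prototypes, sigmas)]

def value(s):
    return sum(w * m for w, m in zip(weights, membership(s)))

def update(s, target, alpha=0.05):
    """Gradient step on the linear-in-memberships value model."""
    mu = membership(s)
    err = target - value(s)
    for i, m in enumerate(mu):
        weights[i] += alpha * err * m

def tune_sigmas(samples):
    """MLE-flavored variance tuning (a simplification of the thesis's
    method): set each prototype's variance to the mean squared distance
    of the samples closest to it."""
    buckets = [[] for _ in prototypes]
    for s in samples:
        d2 = [sq_dist(s, p) for p in prototypes]
        i = d2.index(min(d2))
        buckets[i].append(d2[i])
    for i, b in enumerate(buckets):
        if b and sum(b) > 0:
            sigmas[i] = math.sqrt(sum(b) / len(b))

# Usage: push the value of one point toward 1.0.
s = prototypes[0]
for _ in range(200):
    update(s, 1.0)
```

Shrinking a prototype's sigma narrows its receptive field, which is how tuning the variances pulls apart the membership vectors of nearby state-action pairs.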
Since the number of prototypes remains hard to select, we describe a more effective approach for adaptively choosing it. Our new rough sets-based Kanerva function approximation uses rough set theory to explain how prototype collisions occur. The algorithm eliminates unnecessary prototypes by replacing the original prototype set with its reduct, and reduces prototype collisions by splitting equivalence classes that contain two or more state-action pairs. The approach adaptively selects an effective number of prototypes and greatly improves a Kanerva-based reinforcement learner's ability to solve large-scale problems.
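The rough-set vocabulary can be made concrete with a small sketch: an equivalence class is a set of samples that activate exactly the same prototypes, a reduct is a subset of prototypes inducing the same partition (found greedily here, which only approximates a minimal reduct), and a class holding two or more distinct samples is a collision. The encoding size, radius, and sample counts are arbitrary assumptions:

```python
import random

random.seed(2)

DIM, RADIUS = 8, 3   # encoding length and activation radius (assumed)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def signature(s, protos):
    """Binary activation pattern of s over the prototype set."""
    return tuple(int(hamming(s, p) <= RADIUS) for p in protos)

def classes(samples, protos):
    """Equivalence classes: samples grouped by identical signature."""
    groups = {}
    for s in samples:
        groups.setdefault(signature(s, protos), []).append(s)
    return list(groups.values())

def reduct(samples, protos):
    """Greedily drop prototypes whose removal preserves the partition,
    approximating a rough-set reduct of the prototype set."""
    kept = list(protos)
    target = len(classes(samples, protos))
    for p in protos:
        trial = [q for q in kept if q != p]
        if trial and len(classes(samples, trial)) == target:
            kept = trial
    return kept

def split_collisions(samples, protos):
    """Add a prototype centered on a sample from each colliding class
    (two or more distinct samples), helping to tell them apart."""
    out = list(protos)
    for group in classes(samples, protos):
        if len(set(group)) >= 2:
            out.append(group[0])
    return out

rand_vec = lambda: tuple(random.randint(0, 1) for _ in range(DIM))
samples = [rand_vec() for _ in range(30)]
protos = [rand_vec() for _ in range(12)]
reduced = reduct(samples, protos)
```

Dropping a prototype never splits a class, so the greedy pass can only keep or merge classes; checking that the class count is unchanged is enough to certify that the pruned set distinguishes exactly the same samples.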
Finally, we apply function approximation techniques to scale up the ability of reinforcement learners to solve a real-world application: spectrum management in cognitive radio networks. We show that a multi-agent reinforcement learning approach with decentralized control can be used to select transmission parameters and enable efficient assignment of spectrum and transmit powers. However, the requirement of RL-based approaches that an estimated value be stored for every state greatly limits the size and complexity of the cognitive radio networks that can be solved. We show that function approximation can reduce the memory used for large networks with little loss of performance. We conclude that our spectrum management approach, based on reinforcement learning with Kanerva-based function approximation, can significantly reduce interference to licensed users while maintaining a high probability of successful transmission in a cognitive radio ad hoc network.
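A stateless, single-agent caricature of the spectrum-selection setting (not the thesis's multi-agent formulation) shows the reinforcement-learning core. The channel count, busy set, reward values, and learning parameters are invented for illustration:

```python
import random

random.seed(3)

N_CHANNELS = 4
BUSY = {1}        # channels held by licensed users (hypothetical)

# One value per channel action. In a real cognitive radio network the
# state would also encode neighboring transmitters' channel and power
# choices, and that exploding table is what Kanerva-based function
# approximation compresses.
q = [0.0] * N_CHANNELS

def choose(eps=0.1):
    """Epsilon-greedy channel selection."""
    if random.random() < eps:
        return random.randrange(N_CHANNELS)
    return max(range(N_CHANNELS), key=lambda a: q[a])

def reward(action):
    """+1 for a successful transmission, -1 for interfering with a
    licensed user (assumed reward shaping)."""
    return -1.0 if action in BUSY else 1.0

for _ in range(500):
    a = choose()
    q[a] += 0.1 * (reward(a) - q[a])   # stateless Q-learning update
```

After training, the greedy policy avoids the licensed users' channel, which is the behavior the full system must preserve while shrinking its value table.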
Wu, Cheng, "Novel function approximation techniques for large-scale reinforcement learning" (2010). Computer Engineering Dissertations. Paper 8. http://hdl.handle.net/2047/d20000932