I Introduction
Reinforcement Learning (RL) is a machine learning paradigm in which a robotic agent learns the optimal policy for sequential decision making without complete knowledge of the environment. Recent successes in deep reinforcement learning have enabled RL agents to solve complex problems, from balancing inverted pendulums [1] to playing Atari games [2]. Despite these successes, we do not yet understand how to efficiently transfer learned policies from one task to another [3]. In particular, while some success has been achieved in transferring RL policies between same-state-space domains, the problem of efficient cross-domain skill transfer remains quite open. We use the term "similar" source and target tasks in the sense that they exploit the same underlying physical principles, but their state spaces can be entirely different. For example, in our primary results we consider the problem of knowledge transfer from cart-pole balancing to bicycle balancing. While the two systems share the same dimensionality of state and action spaces, they span different coordinate frames: the cart-pole is defined over the states of the cart and pendulum, with the lateral force on the cart as the action, whereas the bicycle dynamics are modeled over the handlebar rotation and the bicycle roll angle, with the torque applied by the rider on the handlebar as the action. While the dynamics and state-space domains of the two processes might be completely different, they share a commonality in the underlying physical principles: both systems exhibit non-minimum-phase dynamics, and the nature of the control policy is the same, in that the control action is applied in the direction of the fall of the pendulum or bicycle. This similarity in dynamical behavior makes learning in the cart-pole domain relevant to the bicycle-balancing problem.
Another cross-domain transfer result we present is the problem of transferring skills learned in the Mountain Car (Figure 0(a)) [1] to the Inverted Pendulum (Figure 0(b)). In the Mountain Car, the agent learns the optimal policy to make an underpowered car climb a mountain; in the pendulum domain, the agent learns to balance a pendulum upright from its initial down position. In both cases, the common physical principle the agent must learn to exploit is energy exchange: the pendulum must be swung up to the upright position by building enough angular momentum through smaller oscillations, and similarly, the car must climb a steep slope by exchanging energy as it moves up and down the sides of the mountain. In principle, a good RL agent should find it easier to balance a pendulum after it has learned the related task of making an underpowered car climb a mountain.
Humans are capable of efficiently and quickly generalizing the learned skills between such related tasks. However, RL algorithms capable of performing efficient transfer of policies without learning in the new domain have not yet been reported.
To address this gap, our main contribution is an algorithm that enables cross-domain transfer in the RL setting. Leveraging notions from apprenticeship learning [5] and adaptive control [6, 7], we propose an algorithm that can directly transfer the learned policy from a source to a target task.
Given a source task and its optimal policy, a target apprentice model, and an inter-task mapping, we show that it suffices to execute the greedy source policy augmented with an adaptive policy to ensure optimal behavior in the target space.
I-A State of the Art: Transfer Learning in RL
A significant body of the transfer learning literature in RL focuses on using the learned source policy as an initial policy in the target task [8, 9, 10]. Examples include transfer in scenarios where the source and target tasks are similar and no mapping of state spaces is needed [11], or transfer from human demonstrations [12]. However, when the source and target tasks have different state-action spaces, the policy from the source cannot be used directly in the target task. In this case, a mapping between the state-action spaces of the corresponding source and target tasks is required to enable knowledge transfer [3]. The inter-task mapping can be supervised, provided by an agent [13]; hand-coded using the semantics of state features [14, 11, 15]; or unsupervised, using Manifold Alignment [16, 10] or a Sparse Coding Algorithm [9]. The aforementioned TL methods accelerate learning and minimize regret compared to stand-alone RL on the target. However, simply initializing target-task learning with the source policy transferred through an inter-task mapping may not lead to a sample-efficient transfer. In particular, these approaches do not leverage the fact that both tasks exploit the same physical principle, and hence the possibility of reusing the source policy in the target domain.
I-B Main Contributions
In this paper, we take a different approach from using the source policy to initialize RL in the target. Inspired by the literature on model reference adaptive control (MRAC) [7], we propose an algorithm that adapts the source policy to the target task. Unlike the MRAC literature, we can extend the method to probabilistic MDPs with discrete state-action spaces. We argue that optimal policies retain their optimality across domains that leverage the same physical principle but have different state spaces. We augment the transferred policy with a policy adjustment term that adapts to the difference between the dynamics of the two tasks in the target space. If the adaptive policy can be designed to match the target model to the projected source model, we show that the adapted projected policy is optimal in the target task. The adaptive policy is designed to accommodate the difference between the transition dynamics of the projected source and the target task. The key benefit of this method is that it obviates the need to learn a new policy in the target space, leading to highly sample-efficient transfer.
II Transfer Learning with Target Apprentice (TATL)
This section proposes a novel transfer learning algorithm capable of cross-domain transfer between two related but dissimilar tasks. The presented architecture applies to both continuous and discrete state and action spaces. Unlike the available state-of-the-art TL algorithms, which mainly concentrate on policy initialization for RL in the target task, we propose to use the source policy directly as the optimal policy in the related target model. We achieve this one-step transfer through online correction of the transferred policy with an adaptive policy derived from the model transition error. The presented approach has three distinct phases. Phase I involves finding an optimal policy for the source task; for this purpose, we use Fitted Q-Iteration (FQI) to solve the source MDP. Since the source task is a much simpler and smaller problem than the target task, we assume we can always discover an optimal policy for it. Phase II involves discovering a mutual mapping between the state and action spaces of the source and target using Unsupervised Manifold Alignment (UMA). Phase III is the adaptation of the mapped source optimal policy through policy augmentation in the new target domain.
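The three phases above can be sketched end to end on a toy problem. All function names and the 1-D tasks below are illustrative stand-ins for this sketch, not the paper's models:

```python
import numpy as np

def fit_source_policy(Q):
    """Phase I stand-in: greedy policy from a learned tabular Q-function."""
    return lambda s: int(np.argmax(Q[s]))

def align_state_spaces(X_source, X_target):
    """Phase II stand-in: least-squares linear map from source to target states."""
    A, *_ = np.linalg.lstsq(X_source, X_target, rcond=None)
    return A

def adapt_action(a_mapped, model_error_correction):
    """Phase III stand-in: augment the mapped action with an adaptive term."""
    return a_mapped + model_error_correction

# toy usage: two states, two actions, 1-D state embedding
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
pi = fit_source_policy(Q)
X_s = np.array([[1.0], [2.0]]); X_t = np.array([[2.0], [4.0]])
A = align_state_spaces(X_s, X_t)
print(pi(0), pi(1), round(float(A[0, 0]), 3))
```

The real pipeline replaces each stand-in with FQI, UMA, and the apprentice-based adaptive policy described in the following subsections.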
A possible drawback of the proposed method is a suboptimal transfer if the adaptive policy fails to account for the total model error between the projected source and target models: any small residual model error left by the adaptive policy results in suboptimal behavior in the target task. Given the high sample efficiency of the proposed technique, near-optimal behavior in the target can be acceptable. Note that we do not engage in exploration in the target space for transfer; we only exploit the projected source policy in the target space to achieve near-optimal behavior. Nevertheless, with further exploration, we could improve upon the adapted transferred policy and achieve an optimal solution, but this is left for follow-on work.
Details of the three phases of the proposed transfer learning technique using the target apprentice model are as follows:
II-A Phase I: Learning in the Source Task
Fitted Q-Iteration is used to learn the optimal policy in the source task. The policy search is not limited to Q-learning and can be extended to any other optimal policy generation method. A single-layer shallow network is used to approximate the Q-function. For the tasks considered in this paper, shallow networks were found sufficient; for more complex tasks, a deep architecture with multiple layers can be used. This exercise is underway and left for follow-on work.
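As a rough illustration of Phase I, a minimal Fitted Q-Iteration loop on a toy three-state chain; with a one-hot basis, the shallow network reduces to a weight table. The environment, horizon, and discount factor are invented for this sketch:

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

def step(s, a):
    """Deterministic chain: action 1 moves right, action 0 moves left (clipped)."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

# batch of transitions (s, a, s', r) covering the state-action space
batch = [(s, a) + step(s, a) for s in range(n_states) for a in range(n_actions)]

# one-hot basis makes the "shallow network" equivalent to a weight table W
W = np.zeros((n_states, n_actions))
for _ in range(100):
    W_new = np.zeros_like(W)
    for s, a, s2, r in batch:
        W_new[s, a] = r + gamma * W[s2].max()   # fitted regression target
    W = W_new

pi = W.argmax(axis=1)
print(pi.tolist())   # greedy policy: move right in every state
```

With a richer basis (e.g. the RBF networks used later in the experiments), the per-iteration table update becomes a supervised regression of the targets onto the features.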
II-B Phase II: Inter-Task Mapping
In the transfer RL setting, the source and target tasks have different representations of state and action spaces, and cross-domain transfer requires an inter-task mapping to facilitate a meaningful transfer. State spaces belonging to two different manifolds cannot be compared directly. The Unsupervised Manifold Alignment (UMA) technique discovers an alignment between two data sets and provides a one-to-one and onto inter-task mapping. This inter-task mapping allows us to build a one-step optimal policy for the target task. It is important to note that we consider action spaces of the same cardinality and analogous semantics in the analysis and experiments, for ease of exposition of the proposed transfer architecture. Problems with distinct, non-uniform action spaces will have to use classification methods to find the correspondence between action spaces [17, 18]. The transfer is achieved by augmenting the transferred policy with an adaptive policy learned over the target model. The proposed policy transfer and adaptation method reuses the source policy in the target space, resulting in near-optimal behavior. Details of the inter-task mapping are provided in [10, 16] and references therein.
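A simplified stand-in for the role of the inter-task mapping: here a linear map is fitted by least squares between paired state samples. Real UMA [16] aligns the manifolds without given correspondences, so this sketch only illustrates what the recovered mapping is used for, not how UMA computes it:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(50, 2))                  # source-state samples
R = np.array([[0.0, -1.0], [1.0, 0.0]])       # hidden ground-truth map (a rotation)
T = S @ R.T                                   # corresponding target-state samples

chi, *_ = np.linalg.lstsq(S, T, rcond=None)   # chi: source -> target
chi_inv = np.linalg.pinv(chi)                 # chi^{-1}: target -> source

err = np.abs(S @ chi - T).max()
print(bool(err < 1e-8))                       # the hidden linear map is recovered
```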
II-C Phase III: Transfer Learning Through Policy Adaptation
This section presents a transfer algorithm for a pair of tasks in continuous/discrete state and action spaces. Algorithm 1 details TL through policy adaptation using the apprentice model. Empirically, we show the presented method is sample efficient compared to other transfer methods, since the sample complexity of learning an optimal policy in the target task is reduced to the sample complexity of learning the local apprentice model. Algorithm 1 leverages the inter-task mapping detailed in Section II-B to move back and forth between the source and target spaces for knowledge transfer and adaptive policy learning. The performance of the policy transfer depends on the quality of the manifold alignment between the source and target tasks; we assume UMA provides a one-to-one and onto correspondence between the source and target state spaces for efficient transfer. The pseudocode in Algorithm 1 covers both the architecture for cross-domain policy transfer and the policy adaptation through target apprentice learning.
III Markov Decision Process
We assume the underlying problem is defined as a Markov Decision Process (MDP). An MDP is defined as a tuple ⟨S, A, P, R, H, ρ0, γ⟩, where S is a finite set of states and A is a set of actions. P(s' | s, a) is a Markovian state transition model, the probability of transitioning to s' upon taking action a in state s. H is the solution horizon of the MDP, so that the MDP terminates after H steps. ρ0 is the distribution over initial states, and R is the reward function measuring the performance of the agent, assumed to be bounded by R_max. The total return is defined as the sum of discounted rewards, Σ_t γ^t r_t, with γ ∈ [0, 1) the discount factor. A policy π is a mapping from states to a probability distribution over the set of actions. The agent's goal is to find a policy π* that maximizes the total return. We formalize the underlying transfer problem by considering a source MDP M_S and a target MDP M_T, each with its own state space, action space, and transition model. In general, the state spaces can be completely different in the two domains. Regarding the action spaces of the two domains, we assume:
Assumption 1: The cardinality of the discrete action space is the same in the source and target tasks,

|A_S| = |A_T|, (1)

but the limits on action amplitude can be different,

max_{a ∈ A_S} |a| ≠ max_{a ∈ A_T} |a|. (2)
We assume an invertible mapping χ : S_S → S_T provides a correspondence between the state spaces of the source and target models. We will use s_T and s_S to denote the corresponding states at time t in the target and source spaces, respectively.
The transition probabilities also differ. However, we assume the physics of the two problems share similarities in their underlying principles.
Assumption 2: The transition model P_S(s' | s, a) for the source task is available, or we can sample transitions from a source model simulator,

s' ∼ P_S(· | s, a). (3)

This assumption is not very restrictive, since the designer can always select or create a related source task for a given target task.
The target transition probabilities are modeled online using state-action-state triplets collected along trajectories generated by a random exploration policy. We call this approximate model the apprentice to the target.
III-A Algorithm: TATL
Every initial condition s_{0,T} in the target task is mapped to the source space to find the corresponding initial condition of the source task:

s_{0,S} = χ^{-1}(s_{0,T}), (4)
where χ^{-1} is the inverse mapping from target to source and s_{0,S} represents the image of s_{0,T} in the source state space. For the mapped state in the source task, a greedy action is selected using the learned state-action value function:

a_S = argmax_a Q_S(s_S, a). (5)
Using the selected action a_S, the source model at state s_S is propagated to s'_S. The propagated state in the source task is mapped back to the target space using the inter-task mapping function:

s'_T = χ(s'_S), (6)
where s'_T is the image of s'_S in the target space. From Assumption 1, every selected action in the source task has a greedy correspondence in the target task. Using this equivalence of actions, for every selected action in the source task an equivalent action a_T in the target task is selected. The selected action for the target task is augmented with a correction term derived from the adaptive policy,
(7)  
(8) 
where f̂_T is the apprentice model and

f_{S→T}(s_T, a_S) = χ( f_S( χ^{-1}(s_T), a_S ) ) (9)

is the projected source model onto the target space. The adaptive correction is drawn from an adaptive action space around the mapped source action.
The total transferred policy for solving a related target task is proposed to be a linear combination of the mapped optimal policy and an adaptive policy, as follows:

π_T(s_T) = π̂_S(s_T) + π_ad(s_T), (10)

where π̂_S is the source optimal policy mapped into the target space and π_ad is the adaptive policy.
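One control step of the transfer can be sketched for scalar, control-affine toy dynamics; the mapping, source model, apprentice gain, and greedy policy below are all illustrative stand-ins, not the paper's models:

```python
# One TATL control step: the target action is the mapped greedy source action
# plus an adaptive correction that cancels the error between the projected
# source model and the target apprentice prediction.

chi = lambda s: 2.0 * s          # toy inter-task map, source -> target
chi_inv = lambda s: 0.5 * s      # its inverse, target -> source
f_source = lambda s, a: s + a    # toy source transition model
g_hat = 1.0                      # apprentice estimate of the target control gain

def tatl_action(s_T, greedy_source_action):
    s_S = chi_inv(s_T)                   # project target state to source
    a_S = greedy_source_action(s_S)      # greedy action in the source task
    s_T_ref = chi(f_source(s_S, a_S))    # projected source next state
    s_T_pred = s_T + g_hat * a_S         # apprentice one-step prediction
    a_ad = (s_T_ref - s_T_pred) / g_hat  # adaptive correction (control affine)
    return a_S + a_ad                    # Eq. (10): mapped + adaptive

a = tatl_action(2.0, lambda s: -s)       # toy greedy policy a = -s
print(round(a, 3))
```

In the actual algorithm, the greedy source action comes from the learned Q-function (5), and the apprentice prediction comes from the fitted model of Section IV.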
III-B Analysis
Theorem 1
For any given small ε > 0, there exists a δ > 0 such that, if the difference between the true target model and the target apprentice model over the entire state-action space is bounded by δ, then using π*_S, the optimal policy for the source task, the modified policy (10) can be shown to be optimal in the target task.
We analyze the admissibility of the augmented policy for the target space. The target model is assumed to be a nonlinear, control-affine system, and the source model may be any nonlinear system. The discrete-time transition models for the source and target can be written as follows:
s_{t+1}^T = f_T(s_t^T) + g_T(s_t^T) a_t^T, (11)

s_{t+1}^S = f_S(s_t^S, a_t^S), (12)

where f_T and g_T describe the target dynamics and f_S the source dynamics.
The target apprentice is an approximation to the target model. We retain the control-affine property of the target model by using an appropriate basis in the single-layer neural network that models the target dynamics. The approximate, or apprentice, model of the target can be written as a function of the network weights and basis as:
f̂_T(s) = W_f^T Φ(s), (13)

ĝ_T(s) = W_g^T Φ(s), (14)

ŝ_{t+1} = f̂_T(s_t) + ĝ_T(s_t) a_t, (15)

where W_f and W_g are the target apprentice network weights and Φ is the basis function.
Sampling the action from the modified target optimal policy (10) and applying it to the target model, the following holds:

s_{t+1}^T = f_T(s_t^T) + g_T(s_t^T)( â_t + Δa_t ), (16)

where â_t is the optimal action mapped into the target space corresponding to the source optimal policy, and Δa_t is the modification term that cancels the effect of the model error.
From the definition of the model-adaptive policy (8) and the apprentice model (15), the above expression can be simplified to
(17)  
For the chosen policy mixture coefficient, the above expression simplifies to
(18) 
where, for persistently exciting data collected during apprentice model learning, convergence of the parameters to their true values can be shown [19], ensuring that the residual model error vanishes.
Using (9) and (12), the above expression (18) can be rewritten in terms of the source transition model and the inter-task mapping function as
s_{t+1}^T = χ( f_S( χ^{-1}(s_t^T), a_t^S ) ), (19)

where a_t^S is the corresponding source optimal action.
Expression (19) demonstrates that implementing the modified optimal policy (10) in the target task is equivalent to projecting the source optimal trajectories onto the target space. Assuming the existence of a unique correspondence between the source and target task spaces, the policy (10) thus leads to the optimal solution in the target model.
IV Target Task Apprentice Learning
The target apprentice is an approximate model of the target task. In this paper, we learn it using a random policy that explores the target domain [5]. We reuse the dataset of state-action-state triplets generated by the random policy for manifold alignment for target apprentice learning as well. This data reuse further saves time and processing in sample generation for apprentice learning.
IV-A Apprentice Learning: Algorithm
Using a random policy, we explore the target model to collect state-action-state triplets and learn the target apprentice that enables transfer learning:

Run trials in the target task under the random policy for a fixed number of steps. Save the state trajectories experienced.

Using the accumulated data of state-action-state triplets, estimate the system dynamics using least-squares linear regression for a linearly parametrized model, and store the system parameters.
Evaluate the utility of the projected policy in the target model on both the true and approximate systems. The utility function is defined as the average reward accumulated over trials.

If the two utilities differ by less than some chosen small threshold, return the apprentice model.
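The least-squares estimation step above can be sketched on scalar toy dynamics; the parameter values and the form of the model are invented for this example:

```python
import numpy as np

# Fit a linearly parametrized apprentice s' = theta^T [s, a] by least squares
# on random-policy state-action-state triplets.
rng = np.random.default_rng(1)
A_true, B_true = 0.9, 0.2                  # hidden scalar target dynamics
S = rng.normal(size=200)                   # visited states
A = rng.choice([-1.0, 0.0, 1.0], size=200) # random-policy actions
S2 = A_true * S + B_true * A               # observed next states

X = np.column_stack([S, A])                # regressors [s, a]
theta, *_ = np.linalg.lstsq(X, S2, rcond=None)
print(np.allclose(theta, [A_true, B_true]))   # parameters recovered
```

With noisy transitions, the same regression yields the best linear fit, and the utility check in the final step guards against accepting a poor apprentice.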
V Experiments & Results
We present results from five experiments to evaluate the proposed transfer learning framework. The first two experiments consider transfer learning in the same domain but with different transition models and action spaces: the first problem has discrete state and action spaces, while the second has a continuous state space and a nonstationary transition model in the target task. The third and fourth experiments focus on cross-domain transfer, where policies from the cart-pole and mountain car are transferred to the bicycle and inverted pendulum domains, respectively. We also demonstrate that the presented approach is robust to negative transfer through our final experiment. We compare the presented Target Apprentice TL (TATL) against the existing state of the art in transfer for RL, Unsupervised Manifold Alignment (UMATL) [10], and no-transfer RL (fitted Q-learning).
V-A Same-Domain Transfer
We learn the optimal policy in the source task using FQI. In each problem, a distinction in the environment/system parameters makes the source and target tasks different: the target and source domains have the same state space but different transition models and action spaces. We also do not need the target reward model to be similar to the source task's, as the proposed algorithm directly adapts the policy from the source task and does not perform RL in the target domain.
V-A.1 Grid World to Windy Grid World
The source task in this experiment is a non-windy (NW) grid world. The state variables describing the system are the grid positions. The RL objective is to navigate an agent optimally through obstacles from a start to a goal position. The admissible actions are up, down, right, and left. The reward function rewards reaching the goal position, penalizes hitting obstacles, and assigns a constant step reward everywhere else. The target domain is the same as the source but with added wind, which affects the transition model in parts of the state space (see Figure 1(b)).
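A minimal windy grid-world stand-in illustrates how the target differs from the source only through the wind-biased transition model; the layout, wind columns, and reward values are illustrative choices, not the paper's exact settings:

```python
# Windy grid world sketch: same grid as the source task, but a wind term
# shifts transitions in part of the state space.
ACTIONS = {'up': (0, 1), 'down': (0, -1), 'right': (1, 0), 'left': (-1, 0)}
WIDTH, HEIGHT, GOAL = 5, 5, (4, 4)
WIND_COLS = {2, 3}           # columns where wind pushes the agent up by one cell

def step(state, action):
    x, y = state
    dx, dy = ACTIONS[action]
    if x in WIND_COLS:        # target-task wind bias absent in the source
        dy += 1
    x = min(max(x + dx, 0), WIDTH - 1)
    y = min(max(y + dy, 0), HEIGHT - 1)
    r = 10.0 if (x, y) == GOAL else -1.0
    return (x, y), r

print(step((2, 0), 'right'))  # the wind lifts the agent while it moves right
```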
The optimal policy in the source task (non-windy grid world) is learned using Q-Iteration. We do not need any inter-task mapping, as the source and target state spaces are identical. We start from randomly sampled starting positions, execute an exploration policy in the target domain, and collect samples for apprentice model learning. Empirically, we show the proposed method (TATL) is a sample-efficient TL algorithm compared to other transfer techniques. Figures 1(a) and 1(b) show the results of same-domain transfer in the grid world, demonstrating that TATL achieves successful transfer in navigating through the grid with obstacles and wind bias. Figures 1(c) and 1(d) show the quality of transfer through faster convergence to the maximum average reward with fewer training samples compared to the UMATL and RL methods. The presented algorithm attains the maximum average reward for reaching the goal position in far fewer steps; the UMATL and RL algorithms achieve similar performance only after nearly an order of magnitude more steps than the proposed TATL.
V-A.2 Inverted Pendulum (IP) to Time-Varying IP
We demonstrate our approach on a continuous-state domain: Inverted Pendulum (IP) swing-up and balance. The source task is the conventional IP domain [1]. The target task is a nonstationary inverted pendulum whose length and mass are continuously time-varying. The state variables describing the system are the angle and angular velocity. The RL objective is to swing up and balance the pendulum upright. The reward function is selected to attain its maximum value at the upright position and its minimum at the downmost position. The action space is: full throttle right, full throttle left, and zero throttle. Note that the domain is tricky, since the full-throttle action is assumed not to generate enough torque to lift the pendulum directly to the upright position; hence, the agent must learn to swing the pendulum to build oscillations and leverage angular momentum to reach the upright position. The target task differs from the source task in the transition model.
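The nonstationary target dynamics can be sketched as follows; the sinusoidal schedules and constants for the length l(t) and mass m(t), and the simplified pendulum equation, are illustrative assumptions, since the paper's exact functions are not reproduced here:

```python
import numpy as np

def pendulum_step(theta, theta_dot, u, t, dt=0.05, g=9.81):
    """One Euler step of a pendulum whose parameters drift with time."""
    l = 1.0 + 0.2 * np.sin(0.1 * t)   # slowly varying length (assumed form)
    m = 1.0 + 0.2 * np.cos(0.1 * t)   # slowly varying mass (assumed form)
    theta_ddot = (g / l) * np.sin(theta) + u / (m * l ** 2)
    theta_dot = theta_dot + theta_ddot * dt
    theta = theta + theta_dot * dt
    return theta, theta_dot

th, thd = pendulum_step(np.pi, 0.0, 0.0, t=0.0)
print(round(float(th), 6), round(float(thd), 6))
```

Because l(t) and m(t) drift, a fixed source policy alone degrades over time; the apprentice-based adaptive term absorbs the resulting transition-model error.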
The source task uses FQI learning with a single-layer Radial Basis Function (RBF) network. The Q-function is modeled as a linear combination of weights and basis functions. We use RBF bases for value function approximation, with centers spanning the state space and a fixed network learning rate over the FQI iterations.

V-B Cross-Domain Transfer
Next, we consider an even more challenging setting: cross-domain transfer. The problem setup is similar to same-domain transfer, with the notable distinction that the state spaces of the source and target tasks are different.
V-B.1 Cart-Pole to Bicycle
Our main result is the task in which an agent learns to ride a bicycle. We consider the problem of learning to balance; we do not concentrate on navigating to a goal position, since RL for navigation is more of a trajectory optimization problem: the agent can focus on maneuvering towards the target once it has learned to balance the bicycle upright. Balancing is a more interesting problem when the bicycle is below the critical velocity above which it is self-stabilizing. We learn to balance an unstable bicycle at a forward velocity below this critical value. We also simulate imperfect balance by inducing random noise in the CG displacement of the rider about the zero position.
At every time step the agent receives information about the state of the bicycle: the angle and angular velocity of the handlebar, and those of the bike from vertical. For its current state, the agent chooses an action, a torque applied to the handlebar, trying to keep the bike upright. The details of the bicycle dynamics are beyond the scope of this paper; interested readers are referred to [20, 21] and references therein.
We use the cart-pole as the source task for learning to balance the bicycle. The bicycle balance problem is not so different from the cart-pole: in both cases, the objective is to keep an unstable system upright, and in both systems balance is achieved by moving in the direction of the fall. However, the control in the cart-pole affects the angle of the pole more directly, i.e., the cart moves so that it is always under the pole. In the bicycle, the control moves the handlebar in the direction of the fall. Balancing the bike is not so simple: to turn the bike under itself, one must first steer in the other direction; this is called counter-steering [21]. The cart-pole and the bicycle share this commonality in dynamical behavior, as both systems are non-minimum phase, i.e., they possess an unstable zero and tend to move initially in the direction opposite to the applied control. This similarity qualifies the cart-pole system as an appropriate source model for the bicycle balance task.
The cart-pole is characterized by a state vector comprising the position and velocity of the cart and the angle and angular velocity of the pendulum; the action is the force applied to the cart. Cross-domain transfer requires a correspondence between the inter-task state manifolds for mapping the learned policy and the source transition model between the source and target spaces. We use UMA to discover the correspondence between the state spaces of the bicycle and cart-pole models, and FQI to solve for the optimal policy in the source cart-pole model, with a linear network over the cart-pole states as the basis for approximating the action-value function. Figure 3(a) shows the average reward accumulated by TATL, UMATL, and RL (no transfer) in learning to balance the bicycle. In a typical learning process, TATL outperforms the other transfer method and converges to the maximum average reward in 1700 episodes. Each episode is a simulation run that lasts as long as the policy can balance the bicycle without toppling; the episode ends if the bicycle falls or a maximum time of 1000s is reached. Figure 3(b) shows the total time the bicycle was balanced upright by each method; the balancing time for TATL is the highest, many times longer than for the UMATL method.
V-B.2 Mountain Car (MC) to Inverted Pendulum (IP)
We tested cross-domain transfer from the mountain car to an inverted pendulum, where the source and target tasks are characterized by different state and action spaces. The source task, MC, is a benchmark RL problem of driving an underpowered car up a hill. The dynamics of MC are described by two continuous state variables, position and velocity. The input action takes three distinct values: full throttle right, full throttle left, and no throttle. The reward function is proportional to the negative of the squared distance of the car from the goal position. The target task is the conventional IP described in the previous experiment.
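The MC dynamics referred to here follow the classic formulation from Sutton and Barto [1], which can be written compactly as:

```python
import math

# Classic mountain-car dynamics: position x in [-1.2, 0.6], velocity v in
# [-0.07, 0.07], action a in {-1, 0, +1}.
def mc_step(x, v, a):
    v = v + 0.001 * a - 0.0025 * math.cos(3 * x)
    v = min(max(v, -0.07), 0.07)
    x = min(max(x + v, -1.2), 0.6)
    if x == -1.2:
        v = 0.0               # inelastic collision with the left wall
    return x, v

x, v = mc_step(-0.5, 0.0, 1)  # full throttle right from near the valley floor
print(round(x, 6), round(v, 6))
```

The gravity term -0.0025 cos(3x) dominates the throttle term 0.001a, which is why the car must rock back and forth to accumulate energy rather than drive straight up.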
We utilize UMA to obtain this mapping, as described in Section II-B. We do not report the training time for learning the inter-task mapping, since it is common to both the TATL and UMATL methods. We used a random policy to generate samples for manifold alignment and for target apprentice learning. The source task uses FQI learning with a single-layer RBF network for optimal policy generation; the source Q-function is modeled as a linear combination of RBF bases with centers spanning the state space. For all the above results, the training length reported for the TATL method in Figures 2(d), 2(b), and 1(d) is the sample length for target apprentice learning. We compare TATL with UMATL and generic RL on the target task, examining the efficiency and effectiveness of the transfer methods in terms of sample efficiency in learning the target task and speed of convergence to the maximum average reward. As in the same-domain transfer, Figures 2(c) and 2(d) show the quality of transfer for TATL through faster convergence to the maximum average reward with fewer training samples compared to the UMATL and RL methods.
V-C Negative Transfer
In our last result, we demonstrate that the proposed transfer is robust to negative transfer. Given a target model, the effectiveness of transfer depends on the relevance of the source task to the target task. If the relationship is strong, a transfer method can take advantage of it, significantly improving performance on the target task. However, if the source and target are not sufficiently related, or the features of the source task do not correspond to those of the target, the transfer may not improve, or may even decrease, performance on the target task, leading to negative transfer.
We show that UMATL suffers from negative transfer in this experiment, whereas the performance of the presented TATL is far superior to both UMATL and RL (no transfer). We demonstrate this through an inverted pendulum upright-balance task, using the inverted pendulum model as both the source and target system. The target differs from the source model only in the sign of the control action. With exactly the same dynamics in both the source and target models, but with the sign of the control-effectiveness term flipped in the target, we observe that initialized target-task learning (UMATL) suffers from negative transfer. RL is indifferent to the sign change, as it learns the policy from scratch. For the TATL method, since we learn an apprentice model of the target, we learn the sign associated with the action as well; the sign of the policy modification term flips accordingly, and the same transfer performance is achieved irrespective of the control sign change.
Figures 3(c) and 3(d) show the quality of transfer through faster convergence to the maximum average reward with fewer training samples for the proposed TATL method compared to the UMATL and RL methods. Notably, the UMATL method converges to a much lower average reward, gets stuck in a local minimum, and never achieves upright balance of the pendulum. The number of samples UMATL needs to learn the task is also much higher than for no transfer (RL) and the proposed TATL method.
VI Conclusions
We introduced a new transfer learning technique for RL that leads to sample-efficient transfer between source and target tasks. The presented approach attains near-optimality of the transferred policy in the target domain by augmenting it with an adaptive policy that accounts for the model error between the target and the projected source. The sample complexity of the transfer is reduced to that of target apprentice learning, which, as we demonstrated empirically, yields more than an order of magnitude improvement in training lengths over existing approaches.
References
 [1] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
 [2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
 [3] Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633–1685, 2009.

 [4] Alma Rahat. https://bitbucket.org/arahat/matlabimplementationofcontrollingabicycleusing, 2017. [Online; accessed 9-14-2017].
 [5] Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 1–8. ACM, 2005.
 [6] Karl J Åström and Björn Wittenmark. Adaptive control. Courier Corporation, 2013.
 [7] Girish Chowdhary, Tongbin Wu, Mark Cutler, and Jonathan P How. Rapid transfer of controllers between UAVs using learning-based adaptive control. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 5409–5416. IEEE, 2013.

 [8] Matthew E Taylor, Peter Stone, and Yaxin Liu. Value functions for RL-based behavior transfer: A comparative study. In Proceedings of the National Conference on Artificial Intelligence, volume 20, page 880. AAAI Press, 2005.
 [9] Haitham B Ammar, Karl Tuyls, Matthew E Taylor, Kurt Driessens, and Gerhard Weiss. Reinforcement learning transfer via sparse coding. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems – Volume 1, pages 383–390. International Foundation for Autonomous Agents and Multiagent Systems, 2012.
 [10] Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew E Taylor. Unsupervised crossdomain transfer in policy gradient reinforcement learning via manifold alignment. In Proc. of AAAI, 2015.
 [11] Bikramjit Banerjee and Peter Stone. General game learning using knowledge transfer. In IJCAI, pages 672–677, 2007.
 [12] Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pages 2219–2225. IEEE, 2006.

 [13] Lisa Torrey, Jude Shavlik, Trevor Walker, and Richard Maclin. Relational macros for transfer in reinforcement learning. In International Conference on Inductive Logic Programming, pages 254–268. Springer, 2007.
 [14] Yaxin Liu and Peter Stone. Value-function-based transfer for reinforcement learning using structure mapping. In Proceedings of the National Conference on Artificial Intelligence, volume 21, page 415. AAAI Press, 2006.
 [15] George Konidaris and Andrew G Barto. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pages 895–900, 2007.
 [16] Chang Wang and Sridhar Mahadevan. Manifold alignment without correspondence. In IJCAI, volume 2, page 3, 2009.
 [17] Matthew E Taylor and Peter Stone. Cross-domain transfer for reinforcement learning. In Proceedings of the 24th International Conference on Machine Learning, pages 879–886. ACM, 2007.
 [18] Matthew E Taylor, Peter Stone, and Yaxin Liu. Transfer learning via inter-task mappings for temporal difference learning. Journal of Machine Learning Research, 8(Sep):2125–2167, 2007.
 [19] Yanjun Liu and Feng Ding. Convergence properties of the least squares estimation algorithm for multivariable systems. Applied Mathematical Modelling, 37(1):476–483, 2013.
 [20] Jette Randlov and Preben Alstrom. Learning to drive a bicycle using reinforcement learning and shaping. In Proceedings of the Fifteenth International Conference on Machine Learning, pages 463–471, 1998.
 [21] Karl J Åström, Richard E Klein, and Anders Lennartsson. Bicycle dynamics and control. IEEE Control Systems Magazine, 25(4):26–47, 2005.