Bootstrapping methods are more difficult to combine with function approximation (FA) than non-bootstrapping methods are.
What is function approximation?
What is the main cause of a diverging value function?
When can the value function diverge?
The deadly triad (excerpt from Sutton's NIPS 2015 tutorial slides)
The risk of divergence arises whenever we combine three things:

Function approximation:
significantly generalizing from large numbers of examples.
Bootstrapping:
learning value estimates from other value estimates, as in dynamic programming and temporal-difference learning.
Off-policy learning:
learning about a policy from data not due to that policy, as in Q-learning, where we learn about the greedy policy from data generated by a necessarily more exploratory policy.
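To make the first two ingredients concrete, here is a minimal sketch (the toy chain MDP and all names are mine, not from the slides) of semi-gradient TD(0) with linear function approximation. It combines function approximation and bootstrapping but stays on-policy, so only two legs of the triad are present and the weights converge:

```python
import numpy as np

# Toy 5-state chain: always step right; reward 1 on reaching the terminal state.
n_states = 5
features = np.eye(n_states)   # one-hot features (tabular case of linear FA)
w = np.zeros(n_states)        # linear value function: v(s) = w . x(s)
alpha, gamma = 0.1, 0.9

for episode in range(500):
    s = 0
    while s < n_states:
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0
        # Bootstrapping: the target uses the current estimate of the next state's value.
        v_next = 0.0 if s_next == n_states else w @ features[s_next]
        td_error = r + gamma * v_next - w @ features[s]
        # Semi-gradient update: differentiate only the prediction, not the target.
        w += alpha * td_error * features[s]
        s = s_next

print(np.round(w, 2))  # approaches [gamma^4, gamma^3, gamma^2, gamma, 1]
```

Swapping the behaviour policy for one different from the evaluated policy (without importance-sampling corrections) would add the third leg and open the door to divergence.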
Based on the above, the following should converge (always?):
 On-policy learning with any form of bootstrapping, such as TD(0) (guaranteed with linear FA; nonlinear FA can still diverge)
Off-policy vs. on-policy classification:
 Value Iteration: off-policy
 Q-learning: off-policy
 Policy Iteration: on-policy
 SARSA: on-policy
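The Q-learning vs. SARSA distinction comes down to a single line in the update target. A minimal sketch (function names and the epsilon-greedy behaviour policy are mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 2
gamma, eps = 0.9, 0.1

def eps_greedy(Q, s):
    # Behaviour policy: exploratory epsilon-greedy over the current Q estimates.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_learning_target(Q, r, s_next):
    # Off-policy: bootstrap from the GREEDY action,
    # regardless of which action the behaviour policy will actually take.
    return r + gamma * np.max(Q[s_next])

def sarsa_target(Q, r, s_next, a_next):
    # On-policy: bootstrap from the action the behaviour policy actually takes next.
    return r + gamma * Q[s_next, a_next]
```

Q-learning's max makes the policy being learned about (greedy) differ from the policy generating the data (epsilon-greedy); SARSA evaluates the very policy producing its data, which is why it sits on the convergent side of the triad when combined with function approximation.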