Artificial intelligence is a rapidly developing field that has produced many remarkable results in recent years, including the agent created by DeepMind that learns to play Atari 2600 games directly from raw image input. This paper examines the details of that algorithm and reproduces it. In addition, three novel state-exploration strategies are proposed and evaluated: adaptive ε-greedy, γ-greedy, and replay memory swap. Extensive experimental results show that the proposed γ-greedy strategy can outperform the ε-greedy strategy used by Google DeepMind on the same platform.
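For reference, the baseline ε-greedy action selection that the proposed strategies are compared against can be sketched as follows; this is a minimal illustration, assuming Q-values are given as a plain list indexed by action (the function name and signature are illustrative, not from the paper):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon take a uniformly random action (explore);
    otherwise take the action with the highest estimated Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

The proposed variants modify how and when this exploration probability is applied; their details are given in the body of the paper.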