Real-time moving object detection is challenging for moving cameras because the background itself moves. Many studies estimate a homography matrix to compensate for global motion, warping the background model to the current frame; the pixel-wise difference between the current frame and the warped background model then serves as the basis for background subtraction, and moving pixels are extracted by applying an adaptive threshold and post-processing techniques. Deep learning-based dense optical flow, on the other hand, can extract moving pixels accurately, but at a high computational cost. This study proposes a method that enhances a classical background modeling approach with deep learning-based dense optical flow. The main contribution of this paper is a fusion algorithm that combines dense optical flow with the background modeling approach. Background modeling methods are error-prone, especially under continuous camera motion, while optical flow alone may not always be efficient; our hybrid method fuses both techniques to improve detection accuracy. We also propose a software architecture that runs the background modeling and dense optical flow methods in parallel processes. The proposed implementation significantly increases the method's processing speed, while the proposed fusion and combining strategy improves detection results. Experimental results show that the proposed method runs at high speed and performs competitively against methods in the literature.
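The compensation-and-subtraction step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the homography `H` is assumed to be already estimated (e.g. from matched features), warping uses nearest-neighbour inverse mapping, and the "adaptive threshold" is stood in for by a mean-plus-k-sigma rule on the difference image; the function names `warp_background` and `moving_pixel_mask` are hypothetical.

```python
import numpy as np

def warp_background(bg, H, out_shape):
    """Warp the background model into the current frame's coordinates.

    Inverse mapping with nearest-neighbour sampling: for each output pixel,
    apply H^-1 to find the corresponding pixel in the background model.
    """
    h, w = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = Hinv @ pts          # homogeneous source coordinates
    src /= src[2]             # perspective divide
    sx = np.round(src[0]).astype(int).clip(0, bg.shape[1] - 1)
    sy = np.round(src[1]).astype(int).clip(0, bg.shape[0] - 1)
    return bg[sy, sx].reshape(h, w)

def moving_pixel_mask(frame, bg, H, k=2.5):
    """Background subtraction after global-motion compensation.

    The threshold adapts to the statistics of the difference image
    (mean + k * std), a simple stand-in for adaptive thresholding.
    """
    warped = warp_background(bg, H, frame.shape)
    diff = np.abs(frame.astype(float) - warped.astype(float))
    t = diff.mean() + k * diff.std()
    return diff > t
```

In a full pipeline, `H` would be estimated per frame with RANSAC over feature matches so that moving-object correspondences are rejected as outliers, and the binary mask would be cleaned with morphological post-processing.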
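The parallel-process architecture can be illustrated with Python's standard `multiprocessing` module. This is a sketch under stated assumptions, not the paper's software design: `background_mask` and `flow_mask` are trivial stand-ins for the background-subtraction branch and the deep dense-optical-flow branch, and the logical-AND fusion is a placeholder for the paper's actual fusion and combining strategy.

```python
from multiprocessing import Pool

import numpy as np

def background_mask(frame):
    # Stand-in for the homography-compensated background-subtraction branch.
    return frame > frame.mean()

def flow_mask(frame):
    # Stand-in for the dense-optical-flow branch (e.g. a learned flow network
    # followed by flow-magnitude thresholding).
    return frame > np.median(frame)

def fused_mask(frame):
    """Run both detection branches in parallel worker processes, then fuse.

    apply_async dispatches each branch to its own process so the slower
    branch does not block the faster one; fusion here is a simple AND.
    """
    with Pool(processes=2) as pool:
        bg_result = pool.apply_async(background_mask, (frame,))
        fl_result = pool.apply_async(flow_mask, (frame,))
        m_bg, m_fl = bg_result.get(), fl_result.get()
    return np.logical_and(m_bg, m_fl)
```

In a streaming setting, long-lived worker processes fed by frame queues would avoid per-frame pool start-up cost, which is where the speed-up of a parallel architecture comes from.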