We present a scalable object-tracking framework capable of tracking the contours of rigid and non-rigid objects in the presence of occlusion. The method adaptively divides the object contour into subcontours and tracks each one using several low-level features, such as color edges, color segmentation, motion models, motion segmentation, and shape-continuity information, combined in a feedback loop. We also introduce novel performance measures that evaluate the quality of the segmentation and tracking; their results are used in the feedback loop to adjust the weight assigned to each low-level feature for each subcontour at every frame. The framework is scalable in that it can be configured for rough real-time tracking of simple objects as well as pixel-accurate off-line tracking of more complex objects. The proposed method does not depend on any single motion or shape model and requires no training. Experimental results demonstrate that the algorithm tracks object boundaries accurately under significant occlusion and background clutter.
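The per-subcontour feedback step described above can be sketched as follows. This is a minimal illustrative sketch, assuming a simple blend-and-renormalize update rule; the feature names, the `rate` parameter, and the update formula are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of the per-subcontour weight-update step: each
# low-level feature's weight is adjusted every frame according to a
# performance score, then renormalized so the weights sum to one.
# The update rule below is an illustrative assumption.

FEATURES = ["color_edge", "color_seg", "motion_model", "motion_seg", "shape"]

def update_weights(weights, scores, rate=0.5):
    """Blend previous weights with normalized performance scores."""
    total = sum(scores[f] for f in FEATURES)
    if total == 0:
        return dict(weights)  # no evidence this frame; keep old weights
    blended = {f: (1 - rate) * weights[f] + rate * scores[f] / total
               for f in FEATURES}
    norm = sum(blended.values())
    return {f: w / norm for f, w in blended.items()}

# One frame: equal initial weights; motion segmentation scored highest,
# so its weight grows relative to the weaker features.
w0 = {f: 1 / len(FEATURES) for f in FEATURES}
scores = {"color_edge": 0.6, "color_seg": 0.4, "motion_model": 0.7,
          "motion_seg": 0.9, "shape": 0.5}
w1 = update_weights(w0, scores)
```

Renormalizing after the blend keeps the weights a valid convex combination, so the tracked subcontour position remains a weighted vote among the features regardless of how the individual scores drift.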