We address a method for the approximate computation of optimal control policies applicable to a particular class of stochastic control problems whose dynamics exhibit a certain convexity-preserving property. Problems of this type appear in many applications and encompass important examples arising in optimal stopping and in control based on partial observations. Exploiting this specific structure, we propose a numerical method that enjoys a number of desirable properties. In particular, we obtain a remarkably strong approximation of the value function: within our numerically tractable approach, we prove convergence to the value function of the original problem uniformly on compact sets. This is a significant advantage, particularly for high-dimensional control problems, where the only competing methods, from the least-squares Monte Carlo family, guarantee merely Lp-convergence, and only under several restrictions. Since the presented algorithm is simple and stable, and its procedures are dimension-independent, the author hopes that it can help solve high-dimensional control problems where other methods reach their computational limits.
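The abstract does not spell out the algorithm, but the key structural fact it relies on, that convexity can be exploited for uniform-on-compacts approximation, can be illustrated generically. The sketch below (an illustration only, not the paper's method) approximates a convex function by the pointwise maximum of its tangent lines, a max-affine lower bound whose sup-norm error on a compact interval shrinks as the set of tangency points is refined; the quadratic `f` and the knot grids are hypothetical choices for the demonstration.

```python
import numpy as np

def max_affine_approx(f, df, knots):
    """Max-affine lower approximation of a convex function f.

    The pointwise maximum of tangent lines to a convex f is again
    convex and never exceeds f, so refining the knots improves the
    approximation uniformly on compact sets.
    """
    def g(x):
        x = np.asarray(x, dtype=float)
        # tangent at knot t, evaluated at x: f(t) + f'(t) * (x - t)
        tangents = [f(t) + df(t) * (x - t) for t in knots]
        return np.max(tangents, axis=0)
    return g

f = lambda x: x ** 2          # a convex stand-in for a value function
df = lambda x: 2.0 * x        # its derivative

xs = np.linspace(-1.0, 1.0, 1001)
for n in (3, 9, 27):
    g = max_affine_approx(f, df, np.linspace(-1.0, 1.0, n))
    err = np.max(np.abs(f(xs) - g(xs)))   # sup-norm error on [-1, 1]
    print(f"{n:2d} knots: sup-norm error {err:.5f}")
```

Running the loop shows the uniform error on the compact set [-1, 1] decreasing as the knot grid is refined, which is the flavor of guarantee the abstract contrasts with the weaker Lp-convergence of least-squares Monte Carlo schemes.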