Andrey Nikishaev
1 min read · Nov 14, 2017


Good article, but I have a few recommendations.
1) You said that it is better to use images with size 2^n, but this is not always true. In many cases it leads to larger image distortions. This happens because forward/backward propagation forms 2^n blocks that produce checkerboard artifacts (https://distill.pub/2016/deconv-checkerboard/), and it also makes the network generalize poorly on non-2^n images. For example, if you look at object detection networks, they tend to avoid 2^n image sizes (not only for this reason, of course).
2) For upsampling in image processing I would recommend using Lambda(lambda x: tf.image.resize_images(…)) instead of UpSampling2D; this gives a much cleaner result. For example, this approach is widely used in style transfer networks. A minimal sketch of the idea is below.
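A rough sketch of the second point, assuming Keras with a TensorFlow 1.x backend; the helper name `bilinear_upsample_2x` and the 2x factor are illustrative, not from the article:

```python
import tensorflow as tf
from keras.layers import Lambda, UpSampling2D

# Typical upsampling layer: repeats pixels (nearest neighbour), which can
# contribute to blocky, checkerboard-like outputs.
up_nearest = UpSampling2D(size=(2, 2))

def bilinear_upsample_2x(x):
    # Compute the target size dynamically so the layer works for any input size.
    shape = tf.shape(x)
    new_size = tf.stack([shape[1] * 2, shape[2] * 2])
    # Default method of tf.image.resize_images is bilinear interpolation.
    return tf.image.resize_images(x, new_size)

# Bilinear resize wrapped in a Lambda layer, as used e.g. in style transfer decoders.
up_bilinear = Lambda(bilinear_upsample_2x)
```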
