Computer Graphics and Multimedia Lab

Head of the laboratory: Yuri Bayakovski, Associate Professor, PhD.

Contact information
Phone number: +7 (495) 939-01-90

The Graphics & Media Lab was established in 1999. The Laboratory has solid research experience in many areas of computer graphics, computer vision, and image and video processing. Members of the Laboratory staff teach several courses in these areas. Undergraduate students are strongly encouraged to participate in research projects and to co-author papers at Russian and international scientific conferences. The research is supported by the Russian Foundation for Basic Research, by state contracts, and by the international companies Intel, Microsoft, and Samsung. The Laboratory is one of the main organizers of GraphiCon, the annual international conference on computer graphics and vision (http://www.graphicon.ru/en).

Staff members:

  • Dmitry Vatolin, Senior Research Fellow, PhD
  • Alexey Ignatenko, Senior Research Fellow, PhD
  • Ilya Tisevich, Researcher
  • Alla Maslennikova, Engineer
  • Michael Erofeev, Engineer
  • Victoria Ignatova, Engineer
  • Kristina Zipa, Engineer

Master programs (MSc)

  • Mathematical and computer methods of image processing. Supervisors: Prof. Krylov, Prof. Bayakovski, Dr. Ignatenko.

Main scientific directions

The main scientific directions of the Lab are computer graphics, image and video processing, and recognition. Several notable research projects are described in the sections below.

Image-based 3D modeling of urban environments

This project is devoted to the highly automated creation of photorealistic 3D models of buildings from photographs. Algorithms for both single-view and multi-view 3D reconstruction of urban scenes are developed. The resulting 3D models are composed of a number of vertical walls and a ground plane, where the ground-vertical boundary is a continuous polyline. Each model is then refined by recognizing and modeling facade elements in rectified images of building facades. Using the regular facade structure, textures are further cleaned of occluding objects such as vegetation and wires. This project has been conducted in cooperation with Samsung Advanced Institute of Technology.

A mobile mapping system is a vehicle equipped with cameras, laser scanners, and positioning systems. While moving along a specified route, it gathers georeferenced imagery and a 3D point cloud. Our algorithm for traffic sign detection and recognition in images uses synthetic pictograms for training, which allows easy adaptation to the traffic signs of any country, to any camera, and to various shooting conditions. Our system for automated detection of pavement defects and road markings in images uses an online learning technique: the user marks only a few defects at a time, and the system interactively trains itself. One of the main research topics is object segmentation and recognition in large 3D point clouds obtained with laser scanners.
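
The online learning idea can be illustrated with a minimal sketch. This is not the Lab's actual system; the class name, the two-dimensional "features", and the labels are all hypothetical. Each time the user marks a sample, a nearest-centroid classifier updates its per-class means incrementally and can be queried immediately, with no retraining from scratch:

```python
# Toy online learner: per-class running means, updated one sample at a time.
class OnlineNearestCentroid:
    def __init__(self):
        self.sums = {}    # label -> per-feature sums
        self.counts = {}  # label -> number of samples seen

    def update(self, features, label):
        """Incorporate one user-marked sample incrementally."""
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]
        self.counts[label] += 1

    def predict(self, features):
        """Return the label whose centroid is closest to the feature vector."""
        def dist2(label):
            centroid = [s / self.counts[label] for s in self.sums[label]]
            return sum((c - f) ** 2 for c, f in zip(centroid, features))
        return min(self.sums, key=dist2)

clf = OnlineNearestCentroid()
clf.update([0.9, 0.1], "defect")   # feature values are illustrative only
clf.update([0.2, 0.8], "asphalt")
print(clf.predict([0.8, 0.2]))     # -> defect
```

A real system would of course use richer image features and a stronger classifier, but the interaction loop — mark a few samples, update, predict — is the same.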

Video surveillance

Recognition of video surveillance footage requires detection, tracking, and further recognition of people and their behavior. We are developing algorithms for object tracking, face classification and recognition, person re-identification, and content-based image retrieval from video archives.

Images of urban and indoor environments often contain text, e.g., names of shops and companies, street addresses, and book titles. Text detection and recognition can be used in mobile applications for automatic translation into other languages and to help visually impaired people; it is also useful for image annotation and content-based retrieval. While text recognition in scanned documents is widely regarded as a solved problem, variations in color, font, and background make text recognition in natural images a hard, unsolved problem. This project is supported by Microsoft Research.

Digital photomontage for images and video

Image and video matting is one of the main steps in digital photomontage. The goal of matting is a "soft" segmentation of the selected object from an image and its insertion into a new background image. For a high-quality photomontage, the partial transparency of object boundaries must be estimated, especially in regions with fur and hair. This project is supported by Microsoft Research.
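
The "soft" segmentation is governed by the standard compositing equation: each pixel of the composite is I = alpha*F + (1-alpha)*B, where alpha is the estimated per-pixel transparency, F the foreground, and B the new background. A minimal grayscale sketch (illustrative values, not the Lab's code):

```python
def composite(alpha, fg, bg):
    """I = alpha*F + (1 - alpha)*B, applied per pixel (grayscale for brevity)."""
    return [[a * f + (1.0 - a) * b
             for a, f, b in zip(arow, frow, brow)]
            for arow, frow, brow in zip(alpha, fg, bg)]

alpha = [[1.0, 0.5], [0.0, 0.25]]   # 1 = fully opaque; fractions occur on hair/fur
fg    = [[200, 200], [200, 200]]
bg    = [[40, 40], [40, 40]]
print(composite(alpha, fg, bg))      # [[200.0, 120.0], [40.0, 80.0]]
```

The hard part of matting is estimating alpha (and F) from the image; once they are known, insertion into any new background is this one blend.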

Interactive photorealistic rendering of gemstones

Photorealistic rendering of gemstones is an important part of modeling diamond cuts. The goal of the rendering algorithms is to produce an image of a gemstone model that is close to the human perception of the real stone. To achieve this, the stone material, the environment, and the properties of the optical system must be carefully modeled.
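
One ingredient of any physically based gemstone renderer is tracing rays through the facets, where refraction and total internal reflection follow Snell's law. The sketch below is a generic illustration (not the Lab's renderer) of the vector form of Snell's law; the function name and interface are assumptions:

```python
import math

def refract(incident, normal, n1, n2):
    """Refract a unit direction at a surface (normal faces the incident side),
    using the vector form of Snell's law. Returns None on total internal
    reflection -- the effect that keeps light bouncing inside a diamond."""
    eta = n1 / n2
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    cos_t = math.sqrt(k)
    return [eta * i + (eta * cos_i - cos_t) * n
            for i, n in zip(incident, normal)]

# A ray hitting a diamond facet (refractive index ~2.42) head-on is unbent:
print(refract([0.0, 0.0, -1.0], [0.0, 0.0, 1.0], 1.0, 2.42))  # ~ [0, 0, -1]
```

Because diamond's refractive index is so high, its critical angle is small (about 24 degrees), so most rays entering the stone undergo several internal reflections, which is what the renderer must reproduce to match human perception of brilliance.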

Shape and optical properties reconstruction from photographs

Building a digital model of a real-world object is a complicated task that often requires expensive 3D scanners. The goal of this project is to develop algorithms for fully automatic reconstruction of the shape and optical properties of an object from its photographs.

Automatic depth map estimation

Depth map estimation is one of the main steps in 2D-to-3D video conversion, 3D video editing, and parallax correction. A depth map is a 2D representation of the distances between the camera and the objects in the scene: the brighter an area of the depth map, the nearer the corresponding object. Several algorithms have been developed for initial depth map estimation: optical-flow-based estimation from a video sequence using camera and object movement, single-image estimation using the blurriness of object boundaries, and estimation based on scene geometry analysis. To obtain more accurate depth maps, new algorithms for depth map filtering and smoothing are being developed.
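
The "brighter = nearer" convention can be made concrete with a small sketch (an illustration, not the Lab's code): camera-to-object distances are linearly mapped to 8-bit brightness, so the nearest point becomes 255 and the farthest becomes 0.

```python
def distances_to_depth_map(distances):
    """Map a 2D grid of camera-to-object distances to 8-bit brightness,
    nearest -> 255, farthest -> 0."""
    flat = [d for row in distances for d in row]
    near, far = min(flat), max(flat)
    span = (far - near) or 1.0  # avoid division by zero for a flat scene
    return [[round(255 * (far - d) / span) for d in row] for row in distances]

distances = [[2.0, 4.0],
             [6.0, 6.0]]                   # distances from the camera
print(distances_to_depth_map(distances))   # [[255, 128], [0, 0]]
```

Real pipelines add per-frame normalization and temporal filtering so that the brightness scale stays consistent across the video.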

Video segmentation algorithm based on camera and object motion information

Correct and precise video segmentation is an important task in many real-world problems. A foreground/background video segmentation algorithm was developed in our Laboratory. The algorithm is based on scene motion analysis and can estimate a precise foreground object mask even when the background is moving.
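
A toy sketch of the general idea (not the Lab's actual algorithm): compensate the estimated camera (background) motion between two frames, then mark pixels whose residual difference is large as foreground. Here the camera motion is a simple known horizontal shift; a real system estimates it from the scene.

```python
def shift(frame, dx):
    """Shift a 2D grayscale frame horizontally by dx pixels, edge-padding."""
    w = len(frame[0])
    return [[row[min(max(x - dx, 0), w - 1)] for x in range(w)] for row in frame]

def foreground_mask(prev, curr, camera_dx, thresh=30):
    """1 where curr differs from the motion-compensated prev frame."""
    compensated = shift(prev, camera_dx)
    return [[1 if abs(c - p) > thresh else 0 for c, p in zip(crow, prow)]
            for crow, prow in zip(curr, compensated)]

prev = [[10, 10, 10, 200, 10]]   # bright blob is a moving object
curr = [[10, 10, 10, 10, 200]]   # static camera in this toy case
print(foreground_mask(prev, curr, camera_dx=0))   # [[0, 0, 0, 1, 1]]
```

After global motion compensation, background pixels cancel out even when the camera moves, so only independently moving objects remain in the mask.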

Video background reconstruction

Many video editing tasks need as much information about the scene background as possible. The reconstructed background can then be used in video correction, object retouching, and multiview video construction. The proposed algorithm reconstructs the background behind the foreground object using information from adjacent frames. In the case of insufficient motion or a static scene, spatial inpainting methods are used to restore the background in unknown areas.
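
The simplest form of "using information from adjacent frames" is a per-pixel temporal median, sketched below as an illustration (the Lab's algorithm is more sophisticated): a foreground object that covers a pixel in only a minority of frames is rejected by the median, revealing the background.

```python
from statistics import median

def reconstruct_background(frames):
    """Per-pixel temporal median over equally sized grayscale frames."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)] for y in range(h)]

# A dark object (value 0) slides across a bright background (value 90):
frames = [[[0, 90, 90]],
          [[90, 0, 90]],
          [[90, 90, 0]]]
print(reconstruct_background(frames))   # [[90, 90, 90]]
```

This also shows why the static-scene case needs spatial inpainting: if the object never moves off a pixel, no frame observes the background there, and the median cannot recover it.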

Recent papers:

  1. O. Barinova and V. Gavrishchaka, Generic regularization of boosting-based optimization for the discovery of regime-independent trading strategies from high-noise time series // ICIC Express Letters, vol. 4, no. 6, pp. 204-208, 2010.
  2. O. Barinova, V. Lempitsky, and P. Kohli, On detection of multiple object instances using Hough transforms // Proc. Intern. Conf. on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE Computer Society Press, pp. 1-8, 2010.
  3. O. Barinova, V. Lempitsky, E. Tretiak, and P. Kohli, Geometric image parsing in man-made environments // Proc. Europ. Conf. on Computer Vision. Lecture Notes in Computer Science. Berlin, Germany: Springer, no. 6312, pp. 57-70, 2010.
  4. R. Shapovalov, A. Velizhev, and O. Barinova, Non-associative Markov networks for 3D point cloud classification // Proc. Photogrammetric Computer Vision and Image Analysis. Paris, France: Societe Francaise de Photogrammetrie et de Teledetection, pp. 1-8, 2010.
  5. R. Shapovalov and A. Velizhev, Cutting-plane training of non-associative Markov network for 3D point cloud segmentation // Proc. of IEEE Intern. Conf. on 3D Imaging, Modeling, Processing, Visualisation and Transmission. Pittsburgh, USA: IEEE Computer Society Press, pp. 1-8, 2011.
  6. V. Kononov, V. Konushin, and A. Konushin, People tracking algorithm for human height mounted cameras // Pattern Recognition. 33rd DAGM Symposium. Lecture Notes in Computer Science. Berlin: Springer, no. 6835, pp. 163-172, 2011.
  7. S. Matyunin, D. Vatolin, Y. Berdnikov, and M. Smirnov, Temporal filtering for depth maps generated by Kinect depth camera // 2011 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2011). Pittsburgh, USA: IEEE Press, pp. 193-197, 2011.

Recent publications:

• 2013

  1. Matrosov M., Ignatenko A.V., Sivovolenko S. Locally adapted detection and correction of unnatural purple colors in images of refractive objects taken by digital still camera // Transactions on Computational Science XIX: Special Issue on Computer Graphics. Lecture Notes in Computer Science. N 7870. Berlin, Germany: Springer, 2013. P. 117-130.
  2. Matyunin S., Vatolin D. 3D video compression using depth map propagation // Multimedia Communications, Services and Security. Communications in Computer and Information Science. N 368. Berlin, Germany: Springer, 2013. P. 153-166.
  3. Menshikova G., Bayakovski Y., Luniakova E., Pestun M., Zakharkin D. Virtual reality technology for the visual perception study // Transactions on Computational Science XIX: Special Issue on Computer Graphics. Lecture Notes in Computer Science. N 7870. Berlin, Germany: Springer, 2013. P. 107-116.
  4. Milyaev S., Barinova O.V. Learning graph Laplacian for image segmentation // Transactions on Computational Science XIX: Special Issue on Computer Graphics. Lecture Notes in Computer Science. N 7870. Berlin, Germany: Springer, 2013. P. 92-106.
  5. Voronov A., Vatolin D., Sumin D., Napadovsky V., Borisov A. Methodology for stereoscopic motion-picture quality assessment // Stereoscopic Displays and Applications XXIV. Proc. of SPIE. N 8648. Bellingham, USA: SPIE, 2013. P. 10-24.

• 2012

  1. Barinova O., Lempitsky V., Kohli P. On detection of multiple object instances using Hough transforms // IEEE Trans. on Pattern Analysis and Machine Intelligence. 2012. 34. N 9. P. 1773-1784.
  2. Tretyak E., Barinova O., Kohli P., Lempitsky V. Geometric image parsing in man-made environments // Intern. J. Computer Vision. 2012. 97. N 3. P. 305-321.
  3. Milyaev S., Barinova O. Learning graph Laplacian for image segmentation // GraphiCon'2012: 22nd International Conference on Computer Graphics and Vision, Conference Proceedings. Moscow: MAKS Press, 2012. P. 95-100.
  4. Barinova O., Milyaev S. Self-tuning semantic image segmentation // CDUD 2012 Workshop co-located with the ICFCA 2012. Leuven, Belgium: Katholieke Universiteit Leuven, 2012. P. 59-66.
  5. Barinova O., Shapovalov R., Sudakov S., Velizhev A. Online random forest for interactive image segmentation // EEML 2012 Workshop co-located with the ICFCA 2012. Leuven, Belgium: Katholieke Universiteit Leuven, 2012. P. 1-8.
  6. Novikova T., Barinova O., Kohli P. Large-lexicon attribute-consistent text recognition in natural images // Computer Vision - ECCV 2012. Lecture Notes in Computer Science. N 7577. Berlin: Springer, 2012. P. 752-765.

All site materials are available under the Creative Commons Attribution 4.0 International license.