Institute of Bio- and Geosciences

Quantitative Image Processing

We develop automated imaging and image-processing methods for plant phenotyping. The systems we have built over the past years include the ‘GrowMap’ and ‘GrowScreen’ families of methods for plant growth estimation, as well as tools for seed phenotyping and seeding. As a basis for these developments, we investigate image processing and computer vision as non-invasive measurement tools, building on and extending the underlying theory, driven and complemented by applications developed in close interdisciplinary cooperation.

For low-level vision tasks, i.e. pixel-based data interpretation, exact modeling of both the imaging process and the observed process is a prerequisite for describing the desired measurement parameters (Schuchert & Scharr 2010); such local modeling approaches include, e.g., dynamic light fields and transparent motion. Robust estimation of these parameters additionally benefits from statistical modeling of noise and other violations of the physical scene model, e.g. for diffusion tensor MRI (Krajsek et al. 2016), which allows estimation schemes to be parameterized from training data. Suitable data representations then allow estimating not only the mean and variance of a desired parameter, but its full probability density function.
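The idea of building a noise model into the estimator, and of recovering a full density rather than a point estimate, can be illustrated with a minimal Bayesian linear regression sketch. This is hypothetical illustration code, not the methods of the cited papers; the noise variance `sigma2` and prior variance `tau2` are assumed values:

```python
import numpy as np

def bayesian_linear_estimate(X, y, sigma2=0.01, tau2=10.0):
    """Posterior of w in y = X @ w + noise, with Gaussian noise of
    variance sigma2 and Gaussian prior w ~ N(0, tau2 * I).  Returns
    the full Gaussian posterior (mean AND covariance), not just a
    point estimate."""
    n_params = X.shape[1]
    precision = X.T @ X / sigma2 + np.eye(n_params) / tau2  # posterior precision
    cov = np.linalg.inv(precision)                          # posterior covariance
    mean = cov @ (X.T @ y) / sigma2                         # posterior mean
    return mean, cov

# Synthetic example: recover a 2-parameter model from noisy observations.
rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.7])
X = rng.normal(size=(200, 2))
y = X @ w_true + rng.normal(scale=0.1, size=200)
mean, cov = bayesian_linear_estimate(X, y)
```

Because this toy model is Gaussian, mean and covariance characterize the full posterior density; for non-Gaussian models, the same idea requires sampling or other approximate inference schemes.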

When moving from pixels to object-based questions, i.e. mid-level vision, image segmentation is a core task. For example, when leaves are of interest rather than all (green) tissue in a scene, they need to be delineated. We address segmentation by formulating cost functions consisting of data terms, which contain the mathematical model of object properties, and regularization terms, which encode prior knowledge. Segmentation is then usually formulated as an optimization process minimizing this cost functional, but it may also be performed by other approaches based on such energies or the corresponding probabilities, e.g. Gibbs sampling (Krajsek et al. 2011), a Markov chain Monte Carlo method. Systematic errors are minimized, e.g., by optimized model discretizations and the choice of appropriate numerical estimation schemes. Statistical errors due to noise can be reduced by data-driven, nonlinear regularizations. All these terms contain, and heavily depend on, tuning parameters that have a clear statistical motivation and may therefore be inferred from data by suitable machine learning approaches (Krajsek et al. 2009).
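As a toy illustration of such an energy, the sketch below segments a grey-value image by minimizing a data term plus a Potts regularization term with iterated conditional modes (ICM), a simple greedy local optimizer. The specific energy, the class means `mu`, and the weight `lam` are illustrative assumptions, not the formulations used in the cited papers:

```python
import numpy as np

def icm_segment(image, mu=(0.2, 0.8), lam=0.5, n_iter=10):
    """Binary segmentation by greedily minimizing the energy
        E(l) = sum_p (I_p - mu[l_p])**2  +  lam * sum_{p~q} [l_p != l_q]
    i.e. a data term (grey values match a class mean) plus a Potts
    regularizer (neighboring pixels prefer equal labels)."""
    labels = (image > 0.5).astype(int)  # data-only initialization
    h, w = image.shape
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                costs = []
                for l in (0, 1):
                    data = (image[y, x] - mu[l]) ** 2       # data term
                    smooth = 0.0                            # Potts regularizer
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            smooth += lam * (labels[ny, nx] != l)
                    costs.append(data + smooth)
                labels[y, x] = int(np.argmin(costs))
    return labels

# Tiny demo: two constant regions with one corrupted pixel; the
# regularizer smooths the outlier back to its region's label.
img = np.full((8, 8), 0.2)
img[:, 4:] = 0.8
img[2, 1] = 0.9
seg = icm_segment(img)
```

ICM only finds a local minimum of the energy; global or probabilistic alternatives such as Gibbs sampling operate on the same cost functional but explore the label space stochastically.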

In order to foster computer vision research on plant phenotyping problems, we organized workshops (CVPPP) and challenges (Scharr et al. 2016) and published richly annotated data (Minervini et al. 2016).

Fig. 1: Leaf segmentation with examples of provided images and ground-truth data. Top two rows: Arabidopsis. Bottom two rows: Tobacco.


Selected Publications

Krajsek, K., Menzel, M.I., Scharr, H., 2016. A Riemannian Bayesian Framework for Estimating Diffusion Tensor Images. International Journal of Computer Vision 120, 272-299.

Krajsek, K., Menzel, M.I., Scharr, H., 2009. Riemannian Bayesian Estimation of Diffusion Tensor Images. In: The 12th IEEE International Conference on Computer Vision (ICCV 2009), Kyoto, 27 Sep 2009.

Schuchert, T., Scharr, H., 2010. Estimation of 3D Object Structure, Motion and Rotation Based on 4D Affine Optical Flow Using a Multi-camera Array. In: Daniilidis, K., Maragos, P., Paragios, N. (Eds.), Computer Vision – ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 596-609.

Minervini, M., Fischbach, A., Scharr, H., Tsaftaris, S.A., 2016. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognition Letters 81, 80-89.

Scharr, H., Minervini, M., French, A.P., Klukas, C., Kramer, D.M., Liu, X., Luengo, I., Pape, J.-M., Polder, G., Vukadinovic, D., Yin, X., Tsaftaris, S.A., 2016. Leaf segmentation in plant phenotyping: a collation study. Machine Vision and Applications 27, 585-606.

