
A Deep Learning Approach for Estimating the Rind Thickness of Trentingrana Cheese from Images

Ricci, Michele; Aprea, Eugenio; Gasperi, Flavia; Messelodi, Stefano
2023-01-01

Abstract

Checking food quality is crucial in food production and commercialization. In this context, the analysis of macroscopic visual properties, like shape, color, and texture, plays an important role as a first assessment of food quality. Currently, such an analysis is mostly performed by human experts, who observe, smell, and taste the food, and judge it based on their training and experience. Such an assessment is usually subjective, time-consuming, and expensive, so it is of great interest to support it with automated, objective computer vision tools. In this paper, we present a deep learning method to estimate the rind thickness of Trentingrana cheese from color images acquired in a controlled environment. Rind thickness is a key feature for the commercial selection of this cheese and is commonly considered to evaluate its quality. We tested our method on 90 images of cheese slices, where we defined the ground-truth rind thickness using the measures provided by a panel of 12 experts. Our method achieved a Mean Absolute Error (MAE) of ≈ 0.5 mm, less than half of the ≈ 1.2 mm average error of the individual experts with respect to the defined ground truth.
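As context for the reported numbers, the sketch below shows how the MAE evaluation could be computed against a panel-defined ground truth. It is a minimal illustration only: the paper does not state here how the 12 experts' measures are aggregated, so the per-image mean, the simulated values, and all variable names are assumptions, not taken from the paper.

```python
import numpy as np

def mean_absolute_error(pred_mm, gt_mm):
    """MAE in millimetres between predicted and ground-truth rind thickness."""
    pred_mm = np.asarray(pred_mm, dtype=float)
    gt_mm = np.asarray(gt_mm, dtype=float)
    return float(np.mean(np.abs(pred_mm - gt_mm)))

# Hypothetical panel annotations: 12 experts x 90 images, thickness in mm.
rng = np.random.default_rng(0)
panel = rng.normal(loc=4.0, scale=1.2, size=(12, 90))

# Assumed aggregation: per-image mean of the panel's measures defines the ground truth.
ground_truth = panel.mean(axis=0)

# Stand-in for the network's per-image predictions (not the paper's model).
predictions = ground_truth + rng.normal(scale=0.5, size=90)

print(f"Model MAE: {mean_absolute_error(predictions, ground_truth):.2f} mm")

# Per-expert MAE against the same ground truth, averaged over experts,
# mirroring the ~1.2 mm figure reported for the human panel.
expert_mae = np.mean([mean_absolute_error(panel[i], ground_truth) for i in range(12)])
print(f"Average expert MAE: {expert_mae:.2f} mm")
```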
Year: 2023
Proceedings: IMPROVE 2023 - Proceedings of the 3rd International Conference on Image Processing and Vision Engineering
Place of publication: Setúbal, Portugal
Publisher: SCITEPRESS Digital Library
ISBN: 978-989-758-642-2
Authors: Caraffa, Andrea; Ricci, Michele; Lecca, Michela; Modena, Carla Maria; Aprea, Eugenio; Gasperi, Flavia; Messelodi, Stefano
A Deep Learning Approach for Estimating the Rind Thickness of Trentingrana Cheese from Images / Caraffa, Andrea; Ricci, Michele; Lecca, Michela; Modena, Carla Maria; Aprea, Eugenio; Gasperi, Flavia; Messelodi, Stefano. - 1:(2023), pp. 76-83. (Paper presented at the 3rd International Conference on Image Processing and Vision Engineering - IMPROVE, held in Prague, 21-23 April 2023) [10.5220/0011830000003497].
Files in this record:
File: 118300.pdf
Access: open access
Type: Publisher's version (Publisher's layout)
License: Creative Commons
Size: 958.89 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/377930