Sensor fusion for semantic place labeling / Roor, R.; Hess, J.; Saveriano, M.; Karg, M.; Kirsch, A. - (2017), pp. 121-131. (Paper presented at the 3rd International Conference on Vehicle Technology and Intelligent Transport Systems, VEHITS 2017, held in Portugal in 2017) [10.5220/0006365601210131].
Sensor fusion for semantic place labeling
Saveriano M.;
2017-01-01
Abstract
Vehicle-to-vehicle (V2V) communication is used to share knowledge about road situations. Autonomous vehicles can drive and park themselves without driver interaction or presence, but they still serve the driver's needs inefficiently because they do not anticipate the user's behaviour. For instance, if a user wants to stop for quick grocery shopping, there is no need to look for long-term parking far away; a short-term parking zone near the grocery shop would be adequate. To enable autonomous cars to make such decisions, they could benefit from awareness of their driver's context. Knowledge about a user's activities and position can help to retrieve such context information. To describe the meaning of a visited place for the user, we introduce a variant of semantic place labeling based on various sensor data. Data sourced from, e.g., smartphones or vehicles, including Bluetooth, motion activity, status data and WLAN, is used both to gather personalized context information and to compensate for potential inaccuracies. For the classification of place types, over 80 features are generated for each stop. In addition, geographic data is enriched with point-of-interest (POI) information from different location-based context providers. In our experiments, we classify the semantic categories of locations using parameter-optimized multi-class and smart binary classifiers. An overall accuracy of 88.55% correctly classified stops is achieved with the END classifier. Classification without GPS data still yields an accuracy of 85.37%, demonstrating that alternative smartphone data can largely compensate for inaccurate localization, compared with the 88.55% achieved when GPS data was used. Knowing the semantics of a location, the provided context can be used to further personalize autonomous vehicles.
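As an illustration of the classification step described in the abstract, the sketch below shows how a multi-class place-type classifier could be trained and evaluated with Weka's END meta-classifier (an ensemble of nested dichotomies built from binary classifiers). This is a minimal sketch under stated assumptions, not the paper's exact pipeline: the feature export file stops.arff, the position of the class attribute, and the 10-fold cross-validation setup are hypothetical.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.meta.END;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class PlaceLabelDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical export of the per-stop feature table (80+ features per stop).
        Instances data = new DataSource("stops.arff").getDataSet();
        // Assumption: the last attribute holds the semantic place category.
        data.setClassIndex(data.numAttributes() - 1);

        // END decomposes the multi-class problem into an ensemble of
        // nested dichotomies solved by two-class base classifiers.
        END classifier = new END();

        // Estimate per-stop accuracy with 10-fold cross-validation.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(classifier, data, 10, new Random(1));
        System.out.printf("Correctly classified stops: %.2f%%%n", eval.pctCorrect());
    }
}

The printed percentage corresponds to the accuracy measure reported in the abstract (88.55% with GPS features, 85.37% without).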