
Descriptive Sentence Extraction for Text to 3D Scene Generation

Three-dimensional (3D) models allow extensive and heterogeneous information to be stored in a single model, which users can exploit to satisfy various research and study needs. Moreover, 3D visualization would be even more interesting if it were the result of the “materialization” of descriptive sentences extrapolated from texts related to the subject matter. In other words, a direct connection between 3D models and the associated texts or drawings could provide a useful and stimulating explication of the case study. The extraction of specific information from texts, however, is time-consuming and requires the user to have in-depth knowledge of the relevant domain. An innovative solution to the problem, then, is to develop a system that can analyse and "comprehend" the documents in order to automatically provide, as output, portions of text containing geometric and spatial information useful for 3D scene generation. In this paper, the framework of the above-mentioned system is presented and its implementation on a specific corpus, concerning the “World City” project, is evaluated.
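The core idea described above, selecting portions of text that carry geometric and spatial information, can be sketched as a simple keyword-based sentence filter. This is only an illustrative sketch under assumed term lists; the paper's actual system and its linguistic analysis are not described here, and the `SPATIAL_TERMS` vocabulary, function names, and sample text are hypothetical.

```python
import re

# Hypothetical vocabulary of spatial prepositions and geometric terms that
# may signal a descriptive sentence useful for 3D scene generation.
SPATIAL_TERMS = {
    "above", "below", "beside", "behind", "between", "inside",
    "left", "right", "north", "south", "facade", "metres", "meters",
    "height", "width", "length", "rectangular", "circular",
}

def split_sentences(text):
    """Naive sentence splitter on '.', '!' and '?' (illustration only)."""
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

def extract_descriptive_sentences(text):
    """Return the sentences containing at least one spatial/geometric term."""
    hits = []
    for sentence in split_sentences(text):
        words = {w.lower() for w in re.findall(r"[A-Za-z]+", sentence)}
        if words & SPATIAL_TERMS:
            hits.append(sentence)
    return hits

sample = (
    "The library was designed in 1937. "
    "The rectangular tower stands between the two main pavilions. "
    "Its height is thirty metres."
)
print(extract_descriptive_sentences(sample))
# → ['The rectangular tower stands between the two main pavilions',
#    'Its height is thirty metres']
```

A real system would replace the keyword match with proper linguistic analysis (part-of-speech tagging, dependency parsing, domain lexicons), but the filter above shows the input/output shape of the extraction step.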

The Fourth International Conference on Big Data, Small Data, Linked Data and Open Data (ALLDATA) - International Workshop on Knowledge Extraction and Semantic Annotation (KESA 2018), Athens, Greece, 2018

External authors: Valentina Bova (Department of Informatics, Modeling, Electronics and System Engineering, University of Calabria, Rende, Italy)
IIT authors:

Type: Conference proceedings contribution
Field of reference: Information Technology and Communication Systems

File: paper_KESA2018_camera_ready_final.pdf

Activity: Scientific dissemination
Digital Humanities