Dipartimento di Matematica e Informatica - Tesi di Dottorato

Permanent URI for this collection: http://localhost:4000/handle/10955/103

This collection gathers the doctoral theses of the Dipartimento di Matematica e Informatica of the Università della Calabria.


Search Results

Now showing 1 - 10 of 71
  • Item
    Declarative solutions for the Manipulation of Articulated Objects Using Dual-Arm Robots
    (Università della Calabria, 2020-03-17) Bertolucci, Riccardo; Leone, Nicola; Maratea, Marco; Mastrogiovanni, Fulvio
    The manipulation of flexible objects is of primary importance in Industry 4.0 and in home environment scenarios. Traditionally, this problem has been tackled by developing ad-hoc approaches that lack flexibility and portability. We propose an approach in which a flexible object is modelled as an articulated object, or rather, a set of links connected via joints. In this thesis we present a framework based on Answer Set Programming (ASP) for the automated manipulation of articulated objects in a robot architecture. In particular, ASP is employed for representing the configuration of the articulated object, for checking the consistency of the knowledge base, and for generating the sequence of manipulation actions. The framework is exemplified and validated on the Baxter dual-arm manipulator in a simple reference scenario, in which we carried out different experiments analysing the behaviour of different strategies for the action planning module. Our aim is to understand the performance of these approaches with respect to both planning time and execution time. We then extend this scenario to achieve a higher accuracy of the setup and to introduce a few constraints on robot action execution to improve their feasibility. Since the extended scenario entails a high number of possible actions that can be fruitfully combined, we exploit macro actions from automated planning in the module that generates the sequence of actions, in order to deal with this extended scenario more effectively. Moreover, we analyse the possibilities of mixed encodings with both simple actions and macro actions from automated planning in different "concentrations". We finally validate the framework on this extended scenario as well, confirming the applicability of ASP in this complex context and showing the usefulness of macro actions in this robotics application.
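    The thesis encoding is not reproduced in this abstract; the following is only a minimal, illustrative ASP planning sketch in the same spirit, run through the clingo Python API. An articulated object is described by link/joint facts and the planner chooses at most one joint rotation per time step. All names, angles and the horizon are made up for illustration.

        # Minimal ASP planning sketch for an articulated object (illustrative only).
        # Requires the clingo Python package (pip install clingo).
        import clingo

        PROGRAM = r"""
        #const horizon=3.
        time(0..horizon).
        angle(0;90;180;270).

        link(l1;l2;l3).
        joint(j1, l1, l2).          % j1 connects l1 and l2
        joint(j2, l2, l3).

        % initial and goal configurations
        init(j1, 0).   init(j2, 90).
        goal(j1, 90).  goal(j2, 0).

        holds(J, A, 0) :- init(J, A).

        % choose at most one rotation action per time step
        { rotate(J, A, T) : joint(J, _, _), angle(A) } 1 :- time(T), T < horizon.

        % effect and inertia
        holds(J, A, T+1) :- rotate(J, A, T), time(T).
        holds(J, A, T+1) :- holds(J, A, T), not moved(J, T), time(T), T < horizon.
        moved(J, T) :- rotate(J, _, T).

        % the goal configuration must hold at the horizon
        :- goal(J, A), not holds(J, A, horizon).
        #show rotate/3.
        """

        ctl = clingo.Control()
        ctl.add("base", [], PROGRAM)
        ctl.ground([("base", [])])
        ctl.solve(on_model=lambda m: print("plan:", m))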
  • Item
    Arsenic Ore Mixture Froth Image Generation with Neural Networks and a Language for Declarative Data Validation
    (Università della Calabria, 2022-04-14) Zamayla, Arnel; Greco, Gianluigi; Alviano, Mario; Dodaro, Carmine
    Computer vision systems that measure froth flow velocities and stability, designed for flotation froth image analysis, are well established in industry, as they are used to control material recovery. However, flotation systems with limited data have not been explored in the same fashion, given that big-data tools like deep convolutional neural networks require huge amounts of data. This led to the motivation of the research reported in the first part of this thesis, which is to generate synthetic images from limited data in order to create a froth image dataset. The image synthesis is made possible through the use of a generative adversarial network. The performance of human experts in this domain in distinguishing original from synthesized froth images was then compared with the performance of the models. The models exhibited better accuracy levels on average on the tests that were performed. The trained classifier was also compared with some of the established neural network models in the literature, such as AlexNet, VGG16 and ResNet34, using transfer learning. This comparison also showed that these readily available pretrained networks have better accuracy on average than trained experts. The second part of this thesis reports on a language designed for data validation in the context of knowledge representation and reasoning. Specifically, the target language is Answer Set Programming (ASP), a logic-based programming language widely adopted for combinatorial search and optimization, which however lacks constructs for data validation. The language presented in this thesis fills this gap by introducing specific constructs for common validation criteria, and also supports the integration of consolidated validation libraries written in Python. Moreover, the language is designed to inject data validation into ordinary ASP programs, so as to promote fail-fast techniques at coding time without imposing any lag on the deployed system when data are assumed to be valid.
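    As an illustration of the transfer-learning comparison mentioned above (not the thesis code; it assumes PyTorch and a recent torchvision, and all sizes and names are illustrative), a pretrained ResNet34 can be adapted to the binary original-versus-synthetic task by replacing its classification head:

        # Transfer-learning sketch: frozen pretrained backbone, new binary head.
        import torch
        import torch.nn as nn
        from torchvision import models

        model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)  # pretrained backbone
        for p in model.parameters():
            p.requires_grad = False                        # freeze the feature extractor
        model.fc = nn.Linear(model.fc.in_features, 2)      # new head: original vs. synthetic

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        def train_step(images, labels):
            # images: (N, 3, 224, 224) float tensor, labels: (N,) tensor of 0/1
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()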
  • Item
    Reasoning in highly dynamic environments
    (Università della Calabria, 2021-07-03) Pacenza, Francesco; Greco, Gianluigi; Ianni, Giovambattista; Zangari, Jessica
  • Item
    Large-scale ontology-mediated query answering over OWL 2 RL ontologies
    (Università della Calabria, 2022-03-11) Fiorentino, Alessio; Greco, Gianluigi; Manna, Marco
    Ontology-mediated query answering (OMQA) is an emerging paradigm at the basis of many semantic-centric applications. In this setting, a conjunctive query has to be evaluated against a logical theory (knowledge base) consisting of an extensional database paired with an ontology, which provides a semantic conceptual view of the data. Among the formalisms capable of expressing such a conceptual layer, the Web Ontology Language OWL is certainly the most popular one. Reasoning over OWL is, in general, a very expensive task. For that reason, expressive yet decidable fragments of OWL have been identified. Among them, we focus on OWL 2 RL, which offers a rich variety of semantic constructors, apart from supporting all RDFS datatypes. Although popular Web resources, such as DBpedia, fall in OWL 2 RL, only a few systems have been designed and implemented for this fragment. None of them, however, fully satisfies all the following desiderata: (i) being freely available and regularly maintained; (ii) supporting SPARQL queries; (iii) properly applying the sameAs property without adopting the unique name assumption; (iv) dealing with concrete datatypes. This thesis aims to provide a contribution in this setting. Primarily, we present DaRLing: an open-source Datalog rewriter for OWL 2 RL ontological reasoning under SPARQL queries. We describe its architecture, the rewriting strategies it implements, and the results of an experimental evaluation that demonstrates its practical applicability. Then, to reduce memory consumption and possibly optimize execution times of Datalog queries over large databases, we introduce novel techniques to determine an optimal indexing schema together with suitable body orderings for Datalog rules, based on the concept of optimal evaluation plan. The ASP encoding of a planner for the computation of such plans is provided and explained in detail. The new approach is then compared with the standard execution plans implemented in state-of-the-art Datalog systems over widely used ontological benchmarks.
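    To give a flavour of the kind of rewriting a Datalog rewriter for OWL 2 RL performs (this toy sketch is not DaRLing's algorithm; axiom shapes and predicate names are purely illustrative), a few simple OWL 2 RL axioms translate directly into Datalog rules:

        # Toy illustration: simple OWL 2 RL axiom shapes rewritten to Datalog rules.
        def rewrite(axiom):
            kind = axiom[0]
            if kind == "SubClassOf":              # A subclass of B:  A(X) implies B(X)
                _, a, b = axiom
                return f"{b}(X) :- {a}(X)."
            if kind == "SubObjectPropertyOf":     # p subproperty of q:  p(X,Y) implies q(X,Y)
                _, p, q = axiom
                return f"{q}(X,Y) :- {p}(X,Y)."
            if kind == "ObjectPropertyDomain":    # domain of p is C:  p(X,Y) implies C(X)
                _, p, c = axiom
                return f"{c}(X) :- {p}(X,Y)."
            raise ValueError(f"axiom shape not covered in this sketch: {kind}")

        ontology = [("SubClassOf", "gradStudent", "student"),
                    ("ObjectPropertyDomain", "advises", "professor")]
        for ax in ontology:
            print(rewrite(ax))
        # student(X) :- gradStudent(X).
        # professor(X) :- advises(X,Y).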
  • Item
    A logic-based decision support system for the diagnosis of headache disorders according to the ICHD-3 international classification
    (Università della Calabria, 2022-04-21) Costabile, Roberta; Manna, Marco; Greco, Gianluigi
  • Item
    Towards an effective and explainable AI: studies in the biomedical domain
    (Università della Calabria, 2021-07-05) Bruno, Pierangela; Greco, Gianluigi; Calimeri, Francesco
    Providing accurate diagnoses of diseases and maximizing the effectiveness of treatments requires, in general, complex analyses of many clinical, omics, and imaging data. Making fruitful use of such data is not straightforward, as they need to be properly handled and processed in order to successfully perform medical diagnosis. This is why Artificial Intelligence (AI) is largely employed in the field. Indeed, in recent years, Machine Learning (ML), and in particular Deep Learning (DL), techniques have emerged as powerful tools to perform specific disease detection and classification, thus providing significant support to clinical decisions. They have gained special attention in the scientific community, especially thanks to their ability to analyze huge amounts of data, recognize patterns, and discover non-trivial functional relationships between input and output. However, such approaches suffer, in general, from the lack of proper means for interpreting the choices made by the learned models, especially in the case of DL ones. This work is based on both a theoretical and a methodological study of AI techniques suitable for the biomedical domain; furthermore, we put a specific focus on the practical impact of the application and adaptation of such techniques to relevant domains. In this work, ML and DL approaches have been studied and proper methods have been developed to support (i) medical imaging diagnostics and computer-assisted surgery via detection, segmentation and classification of vessels and surgical tools in intra-operative images and videos (e.g., cine-angiography), and (ii) data-driven disease classification and prognosis prediction, through a combination of data reduction, data visualization and classification of high-dimensional clinical and omics data, to detect hidden structural properties useful for investigating the progression of the disease. In particular, we focus on defining a novel approach for the automated assessment of pathological conditions, identifying latent relationships in different domains and supporting healthcare providers in finding the most appropriate preventive interventions and therapeutic strategies. Furthermore, we propose a study of the internal processes performed by artificial networks during classification tasks, with the aim of providing explainability for AI-based models. This manuscript is presented in four parts, each focusing on a specific aspect of DL techniques and offering different examples of their application in the biomedical domain. In the first part we introduce clinical and omics data along with the popular processing methods used to improve the analyses; we also provide an overview of the main DL techniques and approaches aimed at performing disease prediction and prevention and at identifying bio-markers via biomedical data and images. In the second part we describe how we applied DL techniques to perform the segmentation of vessels in ilio-femoral images. Furthermore, we propose a combination of a multi-instance segmentation network and optical flow to solve the multi-instance segmentation and detection tasks in endoscopic images. In the third part a combination of data reduction and data visualization techniques is proposed for the reduction of clinical and omics data and their visualization as images, with the aim of performing DL-based classification. Furthermore, we present an ML-based approach to develop a risk model for class prediction from high-dimensional gene expression data, for the purpose of identifying a subset of genes that may influence the survival rate of specific patients. Eventually, in the fourth part we provide a study of the behaviour of AI-based systems during classification tasks, such as image-based disease classification, which has been a widely studied topic in recent years; in more detail, we show how DL-based systems can be studied with the aim of identifying the most relevant elements involved in the training processes and validating the network's decisions, and possibly the clinical treatments and recommendations.
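    As a rough sketch of the data-reduction-plus-image idea described above (not the thesis pipeline; the use of PCA, the grid size and the random placeholder data are illustrative assumptions), high-dimensional omics profiles can be reduced and arranged into small 2-D grids that a convolutional classifier can consume:

        # Sketch: reduce omics profiles and arrange them as small grey "images".
        import numpy as np
        from sklearn.decomposition import PCA

        def to_images(X, side=8):
            """X: (n_samples, n_features) omics matrix -> (n_samples, side, side) grids."""
            reduced = PCA(n_components=side * side).fit_transform(X)
            # min-max scale each component to [0, 1] so the grids behave like grey images
            lo, hi = reduced.min(axis=0), reduced.max(axis=0)
            scaled = (reduced - lo) / np.where(hi > lo, hi - lo, 1.0)
            return scaled.reshape(-1, side, side)

        X = np.random.rand(100, 5000)        # placeholder for a real expression matrix
        images = to_images(X)                # shape (100, 8, 8), ready for a small CNN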
  • Item
    Surfaces with Prym-canonical hyperplane sections
    (Università della Calabria, 2021-05-17) Anelli, Martina; Ciliberto, Ciro; Galati, Concettina; Knutsen, Andreas Leopold; Greco, Gianluigi
    One of the main problems of algebraic geometry is the classification of algebraic varieties up to isomorphism or birational equivalence. While the classification problem for algebraic curves is essentially solved, the classification problem for surfaces still presents some unknown areas. The research topic discussed in this thesis falls within the classification of complex projective surfaces with Prym-canonical hyperplane sections. The only known examples of this kind of surface are the Enriques surface and a surface in P^5 of degree 10, obtained as the embedding of the blow-up of P^2 at the 10 nodes of an irreducible rational plane curve of degree 6. We say that a surface X has Prym-canonical hyperplane sections if it can be birationally realized in some projective space P^{g-1}, for g >= 5, such that a general hyperplane section C of X is a smooth curve of genus g embedded Prym-canonically. We will show that a surface with Prym-canonical hyperplane sections can be birationally equivalent either to an Enriques surface or to P^2, in which case it can contain only rational double points as singularities, or to a ruled surface over a base curve of genus q >= 0. In the latter case, the sum of the geometric genera of the singularities of X equals the geometric genus q of the base curve. The general property of these surfaces is that, if X' -> X is the minimal resolution of the singularities of X, then there exists only one effective anti-bicanonical divisor on X' whose support is contained in the exceptional locus of the resolution. Since Enriques surfaces have already been studied by several authors, we will construct new surfaces with Prym-canonical hyperplane sections that are birationally equivalent to ruled surfaces or to P^2. The method for constructing examples of this kind of surface consists in finding linear systems L'' on minimal surfaces X'' (ruled surfaces or P^2) such that, after blowing up all the base points of L'' to obtain X', the strict transform L' of L'' is disjoint from the only anti-bicanonical divisor of X', while the anti-canonical divisor of X' restricted to a general curve of L' is a non-zero torsion divisor.
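    For reference, the standard formulation of a Prym-canonical embedding used above (a general definition, not a statement specific to this thesis):

        % A smooth curve C of genus g is Prym-canonically embedded when it is mapped
        % into P^{g-2} by the complete linear system of omega_C twisted by a
        % non-trivial 2-torsion line bundle eta:
        C \hookrightarrow \mathbb{P}^{g-2} \quad \text{via } |\omega_C \otimes \eta|,
        \qquad \eta \in \operatorname{Pic}^0(C),\ \eta \not\cong \mathcal{O}_C,\ \eta^{\otimes 2} \cong \mathcal{O}_C .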
  • Item
    Tight integration of Artificial Intelligence in Game Development Tools
    (Università della Calabria, 2020-03-11) Angilica, Denise; Greco, Gianluigi; Ianni, Giovambattista
    In this thesis we aim to narrow some of the gaps that prevent the adoption of declarative tools within highly dynamic environments, with a particular focus on the context of game development. Integrating reasoning modules, based on declarative specifications, within the commercial game development life-cycle poses a number of unsolved challenges, each with non-obvious solutions. It is necessary to cope with strict time performance requirements; the duality between procedural code and declarative specifications prevents easy integration; and the concurrent execution of reasoning tasks and game updates requires proper information-passing strategies between the two sides involved. In this context, we propose a framework that can be deployed within the well-known Unity game development engine. The so-called ThinkEngine framework allows reasoning modules, based on knowledge representation techniques, to be embedded within the game logic. ThinkEngine respects the Unity development philosophy, and is properly integrated both at design time and at run time. A use case is reported, showing the potential of the proposed infrastructure.
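    The following is a language-agnostic sketch of the information-passing pattern described above, written in Python rather than Unity/C# for brevity; the queue names and the placeholder "reasoning" are illustrative and are not part of ThinkEngine.

        # Sketch: game loop publishes state snapshots, a background reasoner consumes
        # them and publishes decisions, and neither side blocks the other.
        import queue, threading, time

        state_q, decision_q = queue.Queue(maxsize=1), queue.Queue()

        def reasoner_worker():
            while True:
                snapshot = state_q.get()                               # latest game state
                decision = {"action": "move", "seen": snapshot["tick"]}  # placeholder reasoning
                decision_q.put(decision)

        threading.Thread(target=reasoner_worker, daemon=True).start()

        for tick in range(5):                            # stand-in for the game update loop
            if state_q.empty():
                state_q.put({"tick": tick})              # hand the reasoner a fresh snapshot
            while not decision_q.empty():
                print("applying", decision_q.get())      # apply decisions when they arrive
            time.sleep(0.05)                             # frame budget stand-in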
  • Item
    Dyadic TGDs - A new paradigm for ontological query answering
    (Università della Calabria, 2022-03-11) Marte, Cinzia; Greco, Gianluigi; Manna, Marco; Guerriero, Francesca; Leone, Nicola
    Ontology-Based Query Answering (OBQA) consists in querying databases by taking ontological knowledge into account. We focus on a logical framework based on existential rules, or tuple-generating dependencies (TGDs), also known as Datalog±, which collects the basic decidable classes of TGDs and generalizes several ontology specification languages. While many different classes exist in the literature, in most cases each of them requires the development of a specific solver and, only rarely, the definition of a new class allows the use of existing systems. This gap between the number of existing paradigms and the number of developed tools prompted us to define a combination of Shy and Ward (two well-known classes that enjoy good computational properties) with the aim of exploiting the tool developed for Shy. Nevertheless, while studying how to merge these two classes, we realized that it would be possible to define, in a more general way, the combination of existing classes, in order to make the most of existing systems. Hence, in this work, starting from the analysis of the two aforementioned existing classes, we define a more general class, named Dyadic TGDs, that allows all the decidable classes to be extended in a uniform and elegant way, while using the existing related systems. At the same time, we also define a combination of Shy and Ward, named Ward+, and we show that it can be seen as a Dyadic set of TGDs. Finally, to support the theoretical part of the thesis, we implement a BCQ evaluation algorithm for the class Ward+, which takes advantage of an existing solver developed for Shy.
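    For reference, the general shape of a tuple-generating dependency (a textbook formulation, not specific to the classes studied in this thesis):

        % A TGD states that whenever the body holds, suitable witnesses for the
        % existential variables of the head must exist:
        \forall \mathbf{x}\,\forall \mathbf{y}\ \big( \varphi(\mathbf{x},\mathbf{y}) \rightarrow \exists \mathbf{z}\ \psi(\mathbf{x},\mathbf{z}) \big)
        % where \varphi and \psi are conjunctions of atoms. Roughly speaking, decidable
        % classes such as Shy and Ward are obtained through syntactic restrictions on how
        % variables affected by existential quantification may propagate through rule bodies.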
  • Item
    Ontology-driven information extraction
    (2017-07-20) Adrian, Weronika Teresa; Leone, Nicola; Manna, Marco
    Information Extraction consists in obtaining structured information from unstructured and semi-structured sources. Existing solutions use advanced methods from the field of Natural Language Processing and Artificial Intelligence, but they usually aim at solving sub-problems of IE, such as entity recognition, relation extraction or co-reference resolution. However, in practice, it is often necessary to build on the results of several tasks and arrange them in an intelligent way. Moreover, nowadays, Information Extraction faces new challenges related to the large-scale collections of documents in complex formats beyond plain text. An apparent limitation of existing works is the lack of uniform representation of the document analysis from multiple perspectives, such as semantic annotation of text, structural analysis of the document layout and processing of the integrated knowledge. The recent proposals of ontology-based Information Extraction do not fully exploit the possibilities of ontologies, using them only as a reference model for a single extraction method, such as semantic annotation, or for defining the target schema for the extraction process. In this thesis, we address the problem of Information Extraction from homogeneous collections of documents i.e., sets of files that share some common properties with respect to the content or layout. We observe that interleaving semantic and structural analysis can benefit the results of the IE process and propose an ontology-driven approach that integrates and extends existing solutions. The contributions of this thesis are of theoretical and practical nature. With respect to the first, we propose a model and a process of Semantic Information Extraction that integrates techniques from semantic annotation of text, document layout analysis, object-oriented modeling and rule-based reasoning. We adapt existing solutions to enable their integration under a common ontological view and advance the state-of-the-art in the field of semantic annotation and document layout analysis. In particular, we propose a novel method for automatic lexicon generation for semantic annotators, and an original approach to layout analysis, based on common labels identification and structure recognition. We design and implement a framework named KnowRex that realize the proposed methodology and integrates the elaborated solutions.