Dipartimento di Ingegneria Informatica, Modellistica, Elettronica e Sistemistica - PhD Theses
Permanent URI for this collection: http://localhost:4000/handle/10955/31
This collection gathers the PhD theses of the Dipartimento di Ingegneria Informatica, Modellistica, Elettronica e Sistemistica of the Università della Calabria.
Item Bio-inspired techniques applied to the coordination of a swarm of robots involved in multiple tasks (2017-11-13) Palmieri, Nunzia; Crupi, Felice; Marano, Salvatore; Yang, Xin-She

The research topic addressed in this thesis concerns the problem of coordinating robots through decentralized algorithms that use mechanisms based on Swarm Intelligence. These techniques aim to improve the ability of each robot, each of which has limited resources, to decide where to move or what to do on the basis of simple rules and local interactions. In recent years there has been growing interest in solving certain robotics problems with algorithms inspired by natural phenomena and by animals that exhibit highly developed social behaviors and a remarkable capacity for environmental adaptation. In robotics, a crucial aspect is the coordination of the robots so that they can carry out tasks cooperatively. Coordination must allow the agents to adapt to the dynamic conditions of the surrounding environment, giving the system robustness, flexibility and reliability. More specifically, the reference scenario is an area in which objects are scattered and in which a number of robots operate with the goal of detecting and handling those objects. No robot knows the position of the objects, nor does it have any knowledge of the surrounding environment or of the positions of the other robots. The problem is divided into two sub-problems: the first concerns the exploration of the area, the second the handling of the objects. Essentially, each robot explores the environment independently, based on its own current position and on the positions of the others, through an indirect communication mechanism (stigmergy). In the object-handling phase, instead, a direct communication mechanism based on wireless communication is used. The area exploration algorithm draws inspiration from the behavior of certain insects in nature, such as ants, which use the environment in which they live as a communication medium (stigmergy). When a robot detects an object, two approaches have been proposed. In the first, information is spread among the robots through a "one-hop" communication mechanism, and some nature-inspired meta-heuristics are used as the decision-making and coordination mechanism. The second approach relies on "multi-hop" communication, and a coordination protocol, also biologically inspired, has been proposed. Both approaches are based on decentralized mechanisms in which there is no leader issuing hierarchical directives, and each robot makes its decisions autonomously on the basis of the events occurring in the environment. Globally, the result is a self-organized, flexible and highly adaptable system. To test the approaches, a simulator was built, on which numerous studies were carried out in order to evaluate the proposed algorithms and their efficiency, and to estimate how the main variables and parameters of the model can influence the final solution.
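As a rough illustration of the stigmergy-based exploration described above (a minimal sketch, not the thesis algorithm: the grid size, the evaporation factor and the `step` routine are illustrative assumptions), a robot can pick its next cell by avoiding locations already marked with virtual pheromone:

```python
import random

# Minimal sketch of stigmergy-driven exploration on a grid, assuming a shared
# pheromone map that robots can read and write (indirect communication).
GRID_W, GRID_H = 50, 50
EVAPORATION = 0.95          # illustrative decay factor per time step
pheromone = [[0.0] * GRID_W for _ in range(GRID_H)]

def neighbors(x, y):
    """4-connected cells inside the grid."""
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in cand if 0 <= cx < GRID_W and 0 <= cy < GRID_H]

def step(x, y):
    """Mark the current cell, then move toward the least-marked neighbor,
    breaking ties at random."""
    pheromone[y][x] += 1.0
    opts = neighbors(x, y)
    least = min(pheromone[cy][cx] for cx, cy in opts)
    best = [(cx, cy) for cx, cy in opts if pheromone[cy][cx] == least]
    return random.choice(best)

def evaporate():
    """Pheromone slowly decays, so old markings lose influence over time."""
    for row in pheromone:
        for i, v in enumerate(row):
            row[i] = v * EVAPORATION

# Example: one robot wandering for a few steps
pos = (0, 0)
for _ in range(100):
    pos = step(*pos)
    evaporate()
```

In the multi-robot setting each robot would run the same rule against the shared map, which is what lets simple local decisions produce a globally spread-out exploration.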
Item Design of point contact solar cell by means of 3D numerical simulations (Università della Calabria - Dottorato di Ricerca in Information and Communication Engineering For Pervasive Intelligent Environments, 2017-11-13) Guerra González, Noemi Lisette; Crupi, Felice

Nikola Tesla said that "the sun maintains all human life and supplies all human energy". As a matter of fact, the sun furnishes energy to all forms of life: starting from the photosynthesis process, plants absorb solar radiation and convert it into stored energy for growth and development, thus supporting life on earth. For this reason, the sun is considered one of the most important and plentiful sources of renewable energy. This star is about 4.6 billion years old, with another 5 billion years of hydrogen fuel to burn in its lifetime. This gives all living creatures a sustainable and clean energy source that will not run out anytime soon. In particular, solar power is the primary source of electrical and thermal energy produced by directly exploiting the energy irradiated from the sun to our planet. Solar energy therefore offers many benefits: it releases no greenhouse gases (GHGs) or other harmful gases into the atmosphere, it is economically feasible in urban and rural areas, and it is evenly distributed across the planet. Moreover, as mentioned above, solar power is essentially infinite, which is why it is on course to become the largest source of electricity in the world by 2050. On the other hand, most of the energy forms available on earth arise directly from solar energy, including wind, hydro, biomass and fossil fuels, with some exceptions such as nuclear and geothermal energy. Accordingly, solar photovoltaics (PV) is a technology capable of converting the inexhaustible solar energy into electricity by exploiting the electronic properties of semiconductor materials, and it represents one of the most promising ways of generating electricity, as an attainable and smart option to replace conventional fossil fuels. PV energy is also a renewable, versatile technology that can be used for almost anything that requires electricity, from small and remote applications to large, central power stations. Solar cell technology is undergoing a transition to a new generation of efficient, low-cost products based on certain semiconductor and photoactive materials. Furthermore, it has definite environmental advantages over competing electricity generation technologies, and the PV industry follows a pro-active life-cycle approach to prevent future environmental damage and to sustain these advantages. An issue with potential environmental implications is the decommissioning of solar cell modules at the end of their useful life, which is expected to be about 30 years. A viable answer is recycling or re-using them when they are no longer useful, by implementing collection/recycling infrastructure based on current and emerging technologies. Some feasibility studies show that the technology for end-of-life management and recycling of PV modules already exists and that the associated costs are not excessive.
In particular, photovoltaics is a friendly and excellent alternative for meeting the growing global energy demand by producing clean and sustainable electricity that can replace conventional fossil fuels and thus reduce negative greenhouse effects (see section 1.1). Starting from this fact, solar cell specialists have been contributing to the development of advanced PV systems, from a costly space technology to affordable terrestrial energy applications. Indeed, since the early 1980s, PV research activities have achieved significant improvements in the performance of diverse photovoltaic applications. A new generation of low-cost products based on thin films of photoactive materials (e.g., amorphous silicon, copper indium diselenide (CIS), cadmium telluride (CdTe), and thin-film crystalline silicon) deposited on inexpensive substrates increases the prospects of rapid commercialization. In particular, the photovoltaic industry has focused on the development of feasible, high-efficiency solar cell devices that use accessible semiconductor materials and reduce production costs. Nonetheless, photovoltaic applications must improve their performance and market competitiveness in order to increase their global installed capacity. In this context, the design of innovative solar cell structures, along with the development of advanced manufacturing processes, is a key element in the optimization of a PV system. Nowadays, TCAD modeling is a powerful tool for the analysis, design, and manufacturing of photovoltaic devices. In fact, a properly calibrated TCAD model makes it possible to investigate the operation of the studied solar cells in a reliable and detailed way and to identify appropriate optimization strategies, while reducing costs, test time and production effort. This Ph.D. thesis is therefore focused on a research activity aimed at the analysis and optimization of Interdigitated Back Contact (IBC) crystalline silicon (c-Si) solar cells, also known as Back Contact-Back Junction (BC-BJ) cells. In this type of solar cell both metal contacts are located on the bottom of the silicon wafer, which simplifies cell interconnection at module level and guarantees high conversion efficiency thanks to the absence of front-contact shadowing losses. In particular, the main purpose of this thesis is to investigate the dominant physical mechanisms that limit the conversion efficiency of these devices by means of electro-optical numerical simulations. Three-dimensional (3D) TCAD-based simulations were run to analyze the performance of an IBC solar cell featuring point contacts (PC) as a function of the metallization fraction. This scheme was also compared with a similar IBC structure featuring linear contacts (LC) on the rear side of the device. In addition, the impact of introducing a selective emitter (SE) scheme in the PC cell was evaluated. The analyses were carried out by varying geometric and/or process parameters (for example, the size and shape of the metal contacts, doping profiles, carrier lifetimes, and recombination rates). This approach provides a realistic and in-depth view of the behavior of the studied IBC solar cells and also provides useful information for optimizing the architecture of the device in order to enhance the conversion efficiency and minimize production costs.
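Purely as an illustration of how such a simulation campaign can be organized (the thesis uses calibrated 3D TCAD tools; the `run_ibc_simulation` function, the parameter names and the value ranges below are hypothetical placeholders, not the actual setup), a sweep over the rear metallization fraction for the two contact schemes might be driven like this:

```python
import itertools

def run_ibc_simulation(contact_scheme, metallization_fraction, carrier_lifetime_us):
    """Placeholder for a call to the external device simulator: a real driver
    would generate the device deck, launch the solver and parse the J-V curve.
    Here it returns a dummy value so the sketch executes end to end."""
    return 0.0

# Hypothetical sweep ranges: contact scheme, rear metallization fraction, bulk lifetime.
schemes = ["point_contact", "linear_contact"]
fractions = [0.01, 0.02, 0.05, 0.10, 0.20]
lifetimes_us = [500, 1000, 2000]

results = []
for scheme, frac, tau in itertools.product(schemes, fractions, lifetimes_us):
    eta = run_ibc_simulation(scheme, frac, tau)   # conversion efficiency (%)
    results.append({"scheme": scheme, "fraction": frac,
                    "lifetime_us": tau, "efficiency": eta})

# The best configuration per scheme could then be extracted, e.g.:
# best = max((r for r in results if r["scheme"] == "point_contact"),
#            key=lambda r: r["efficiency"])
```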
Item Efficient incremental algorithms for handling graph data (2017-11-13) Quintana Lopez, Ximena Alexandra; Crupi, Felice; Greco, Sergio

Item Data mining techniques for large and complex data (2017-11-13) Narvaez Vilema, Miryan Estela; Crupi, Felice; Angiulli, Fabrizio

During these three years of research I dedicated myself to the study and design of data mining techniques for large quantities of data. Particular attention was devoted to training set condensing techniques for the nearest-neighbor classification rule and to techniques for node anomaly detection in networks. The first part of this thesis focused on the design of strategies for reducing the size of the subset extracted by condensing techniques, and on their experimental evaluation. Training set condensing techniques aim to determine a subset of the original training set that allows all the training set examples to be classified correctly; the subset extracted by these techniques is also known as a consistent subset. The result of the research was the development of several subset selection strategies, designed to determine during the training phase the most promising subset on the basis of different methods for estimating test accuracy. Among them, the PACOPT strategy is based on the Pessimistic Error Estimate (PEE), which estimates generalization as a trade-off between training set accuracy and model complexity. The experimental phase took the FCNN condensing technique as its reference. Among the condensing methods based on the nearest neighbor decision rule (NN rule), FCNN (Fast Condensed NN) is one of the most advantageous techniques, particularly in terms of time performance. We showed that the designed selection strategies preserve the accuracy of a consistent subset, and we also demonstrated that they significantly reduce the size of the model. Comparison with notable training-set reduction techniques for the NN rule witnesses the state-of-the-art performance of the strategies introduced here. The second part of the thesis is directed towards the design of analysis tools for network-structured data. Anomaly detection is an area that has received much attention in recent years and has a wide variety of applications, including fraud detection and network intrusion detection. Techniques focused on anomaly detection in static graphs assume that the networks do not change and are capable of representing only a single snapshot of the data. As real-world networks are constantly changing, the focus has shifted to dynamic graphs, which evolve over time. We present a technique for node anomaly detection in networks where arcs are annotated with their time of creation. The technique aims at singling out anomalies by simultaneously taking into account information concerning both the structure of the network and the order in which connections have been established; the latter information is obtained from the timestamps associated with arcs. A set of temporal structures is induced by checking certain conditions on the order of arc appearance, denoting different kinds of user behaviors. The distribution of these structures is computed for each node and used to detect anomalies. We point out that the approach investigated here is substantially different from techniques dealing with dynamic networks. Indeed, our aim is not to determine the points in time at which a certain portion of the network (typically a community or a subgraph) exhibited a significant change, as is usually done by dynamic-graph anomaly detection techniques. Rather, our primary aim is to analyze each single node by simultaneously taking into account its temporal footprint.
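For context, the classic condensed nearest neighbor idea on which consistent-subset methods build can be sketched as follows. This is a minimal illustration of the general CNN scheme, not the FCNN algorithm or the PACOPT selection strategy of the thesis, and the toy data are illustrative:

```python
import numpy as np

def nearest_label(S_X, S_y, x):
    """Label of the nearest prototype in the current subset (1-NN rule)."""
    d = np.linalg.norm(S_X - x, axis=1)
    return S_y[int(np.argmin(d))]

def condense(X, y):
    """Grow a subset S of (X, y) until every training example is classified
    correctly by the 1-NN rule applied to S, i.e. S is a consistent subset."""
    S_idx = [0]                              # seed with an arbitrary example
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            S_X, S_y = X[S_idx], y[S_idx]
            if nearest_label(S_X, S_y, X[i]) != y[i]:
                S_idx.append(i)              # absorb the misclassified example
                changed = True
    return S_idx

# Toy usage with two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
subset = condense(X, y)
print(f"kept {len(subset)} of {len(X)} training examples")
```

The selection strategies studied in the thesis then choose, among the subsets produced during condensation, the one expected to generalize best rather than simply the final consistent one.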
Item Design of back contact solar cells featuring metallization schemes with multiple emitter contact lines based on TCAD numerical simulations (2017-11-13) Guevara Granizo, Marco Vinicio; Crupi, Felice

The most challenging goal in the PV community is to design and manufacture devices featuring high efficiency at low cost and with the best possible reliability. The key to achieving this target is to optimize and improve current fabrication processes as well as the layouts of the devices. TCAD modeling of PV devices turns out to be a powerful tool that lowers laboratory manufacturing costs and accelerates optimization by providing guidelines on how to proceed. TCAD modeling allows designs to be examined before their implementation, accurately predicting their real behavior. When simulations are correctly calibrated, changing the simulation parameters makes it possible to find ways to improve the design parameters, or simply to better understand the internal functioning of these devices. In this regard, this Ph.D. thesis deals with the electro-optical numerical simulation of interdigitated back-contact (IBC) c-Si solar cells, the architecture toward which industry is currently moving because of its numerous advantages. Among the benefits of this design are its improved efficiency, due to the absence of front optical shading, and the relative simplicity of its mass production. The aim of this thesis is to provide guidelines on the optimal design parameters of IBC solar cells, based on state-of-the-art advanced numerical simulations. Two main topics are treated: (i) the development of a simplified method to compute the optical profiles ten times faster than the traditional one, and (ii) an extensive study of the impact of adding multiple striped metal contacts throughout the emitter region, which improves efficiency by reducing the internal series resistance. A large number of ad-hoc calibrated simulations were performed, sweeping wide ranges of modeling parameters (i.e., geometric sizes, doping profiles, carrier lifetimes, and recombination rates) to investigate their influence on device operation and to identify the most critical ones. This insight leads to a better understanding of this kind of solar cell and helps identify ways to refine structures and enhance the layouts of real devices for either laboratory or industry.
Item Ensemble learning techniques for cyber security applications (2017-07-13) Pisani, Francesco Sergio; Crupi, Felice; Folino, Gianluigi

Cyber security involves protecting information and systems from major cyber threats; frequently, high-level techniques, such as data mining, are used to efficiently fight, mitigate the effects of, or prevent the actions of cybercriminals. In particular, classification can be used efficiently in many cyber security applications, e.g., in intrusion detection systems, in the analysis of user behavior, and in risk and attack analysis. However, the complexity and diversity of modern systems have opened up a wide range of new issues that are difficult to address. In fact, security software has to deal with missing data, privacy limitations and heterogeneous sources. It is therefore unlikely that a single classification algorithm will perform well for all types of data, especially in the presence of changes and under real-time and scalability constraints. To this aim, this thesis proposes a framework based on the ensemble paradigm to cope with these problems. Ensemble learning is a paradigm in which multiple learners are trained for the same task by a learning algorithm, and the predictions of the learners are combined when dealing with new, unseen instances. The ensemble method helps to reduce the variance of the error, the bias, and the dependence on a single dataset; furthermore, it can be built incrementally and lends itself to distributed implementations. It is also particularly suitable for distributed intrusion detection, because it makes it possible to build a network profile by combining different classifiers that together provide complementary information. However, building the ensemble can be computationally expensive, since when new data arrive it is necessary to restart the training phase. For this reason, the framework is based on Genetic Programming to evolve a function for combining the classifiers composing the ensemble, which has some attractive characteristics. First, the models composing the ensemble can be trained on only a portion of the training set, and they can then be combined and used without any extra training phase. Moreover, the models can be specialized for a single class and can be designed to handle the difficult problems of unbalanced classes and missing data. In case of changes in the data, the combining function can be recomputed incrementally, with moderate computational effort, and, in a streaming environment, drift strategies can be used to update the models. In addition, all the phases of the algorithm are distributed and can exploit the advantages of running on parallel/distributed architectures to cope with real-time constraints. The framework is oriented and specialized towards cyber security applications: the algorithm is designed to work with missing data, unbalanced classes, models specialized on particular tasks, and models working with streaming data. Two typical scenarios in the cyber security domain are considered, and experiments are conducted on artificial and real datasets to test the effectiveness of the approach. The first scenario deals with user behavior: the actions taken by users can lead to data breaches, and the resulting damage can have a very high cost. The second scenario deals with intrusion detection systems. In this research area, the ensemble paradigm is a rather new technique, and its advantages still need to be fully understood.

Item Problemi di Allocazione in Giochi Cooperativi: Approssimazioni e Casi Trattabili per il Calcolo del Valore di Shapley (2017-07-26) Mendicelli, Angelo; Scarcello, Francesco; Crupi, Felice

Item Trust management analysis and proposal of trust-based energy-efficient intrusion detection system for wireless ad-hoc networks (2017-07-26) Lupia, Andrea; Crupi, Felice; De Rango, Floriano

Item mm-Wave Antennas For Satellite And Mobile Communications (2017-07-26) Greco, Francesco; Crupi, Felice; Amendola, Giandomenico

Ever-growing demands for higher data rates and bandwidth are pushing wireless applications towards the millimetre-wave band (30-300 GHz), where sufficient bandwidth is available and high performance can be achieved without using complex modulation schemes. In addition to telecom applications, millimetre-wave bands have enabled novel short-range and long-range radar sensors for automotive use, as well as high-resolution imaging systems for medical and security applications. The major obstacle to the wide deployment of commercial wireless and radar systems in this frequency range is the high cost of the overall system. The main objective of this work is to investigate and develop different types of antennas that could be applied in satellite communications, in the future fifth generation of mobile networks, and in automotive radar systems. In particular, four antennas have been developed in this thesis. The first is a dual-band, dual-polarized antenna array for Satcom-On-The-Move applications, which provides a fixed beam in the 19-21 GHz (Rx) and 29-31 GHz (Tx) bands; this structure could be used in association with a fully mechanical pointing system. The second is a reflector terminal in the Q/V bands that could be used for the backhaul of the future 5G architecture; a Ka-band scaled version of the antenna was realized and measured, proving that it can be a valid solution for compact earth terminals. The third is a novel reflectarray with potential applications in both satellite communications and high-speed point-to-point radio links. Finally, a 77 GHz transmit-array antenna mounted on a QFN package was developed; due to its compact dimensions, this kind of structure could represent a possible solution for automotive radar systems.

Item Requirements engineering for complex systems (2017-07-26) Gallo, Teresa; Saccà, Domenico; Furfaro, Angelo; Garro, Alfredo; Crupi, Felice

Requirements Engineering (RE) is a part of Software Engineering and, in general, of System Engineering. RE aims to help in building software that satisfies the user needs, by eliciting, documenting, validating and maintaining the requirements that a piece of software has to satisfy adequately. During its 30 years of history, the importance of RE has been perceived to varying degrees: from being the most important activity, well formalized and defined in large, complete documents that were the bible of the software project, to the opposite extreme, where it has been reduced to an informal, volatile activity, never formalized and not maintained at all because of constant change. The need to manage requirements well is extremely important, mainly for complex systems, which involve large investments of resources and/or cannot be easily replaced.
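As a simplified illustration of combining the predictions of several specialized classifiers (the thesis evolves the combining function with Genetic Programming; the fixed weighted-vote combiner below is only a hand-written stand-in, and the classifier set and toy data are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Toy binary dataset standing in for network/behavioral features.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]

# Each base model could be trained on a different portion/source of the data;
# here they all see the same split to keep the sketch short.
models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(max_depth=4),
          GaussianNB()]
for m in models:
    m.fit(X_train, y_train)

def combine(probas, weights):
    """Fixed weighted average of per-model probabilities; in the thesis this
    combining function is instead evolved by Genetic Programming."""
    return np.average(probas, axis=0, weights=weights)

probas = np.stack([m.predict_proba(X_test)[:, 1] for m in models])
ensemble_pred = (combine(probas, weights=[0.4, 0.3, 0.3]) >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble_pred == y_test).mean())
```

The appeal of evolving the combiner rather than fixing it is that new or retrained base models can be incorporated by recomputing only the combining function, without retraining the whole ensemble.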
A system can be complex because it is realized through the collaboration of a numerous and heterogeneous set of stakeholders, as for example in a large industrial research project, often co-funded with public resources, where many partners with different backgrounds and languages must cooperate to reach the project goals. Furthermore, a system can be complex because it constitutes the IT system of an enterprise, which has grown over time through the addition of many pieces of software, integrated in many different ways; such an IT system is often distributed, interoperates ubiquitously on many computers, and behaves as a single large system, even though it has been developed by many software providers, at different times, with different technologies and tools. The complexity of these systems is particularly relevant in several critical industrial domains where real-time and fault-tolerance features are vital, such as automotive, railway, avionics, satellite, health care and energy; in these domains a great variety of systems are usually designed and developed by organizing and integrating existing components that pool their resources and capabilities to create a new system able to offer more functionality and performance than the simple sum of its components. Typically, the design and management of such systems, better known as Systems of Systems (SoS), involve properties that are not immediately defined, derived or easily analyzed starting from the properties of their stand-alone parts. For these reasons, SoS require suitably engineered methods, tools and techniques for managing requirements and every other phase of the construction process, with the aim of minimizing any risk of failure. However, every complex IT system, even when it does not belong to a critical domain but supports the core business of an enterprise, must be well governed to avoid the risk of rapidly becoming inadequate for its role. This risk becomes high when many uncontrolled IT developments, aimed at supporting requirements changes, accumulate. In fact, as complexity grows, the IT system may become too expensive to maintain; it must then be retired and replaced after too short a time, often with large and underestimated difficulties. For these reasons, complex systems must be governed during their evolution, both from the point of view of 'which application is where and why' and from the point of view of the supported requirements, that is, 'which need is supported by each application and for whom'. This governance facilitates the knowledge, management, essentialness and maintenance of complex systems, allowing efficient support and a long-lasting system, and consequently minimizing wasted costs and inadequate support for the core business of the enterprise. This work mainly addresses the issue of governing systems that are complex either because they are the result of the collaboration of many different stakeholders (e.g., large co-funded R&D projects) or because they are Enterprise Information Systems (EIS) (e.g., the IT systems of medium/large enterprises). In this direction, a new goal-oriented requirements methodology, named GOReM, was defined, which has specific features useful for the addressed issues. In addition, a new approach, ResDevOps, has been conceived, which allows refining the governance of the requirements of an EIS that is continuously improved and that grows and evolves over time.
The thesis presents the state-of-the-art framework within which these activities are placed, together with a set of case studies developed within real projects, mainly large R&D projects involving the University of Calabria, but also some cases from real industrial projects. The main results were published in international conference proceedings, and a manuscript is in press in an international journal.