John Magee, Stephen Sheridan, and Christina Thorpe, School of Informatics and Cybersecurity, Technological University Dublin, Ireland
The proliferation of mobile devices allows financial institutions to offer remote customer services, such as remote account opening. Manipulation of identity documents using image processing software is a low-cost, high-risk threat to modern financial systems, opening these institutions to fraud through crimes related to identity theft. In this paper, we describe our exploratory research into the application of biomedical image algorithms to the domain of document recapture detection. We perform a statistical analysis to compare different types of recaptured documents and train a support vector machine classifier on the raw histogram data generated using the Meijering filter. The results show that biomedical imaging algorithms such as the Meijering filter have potential as a form of texture analysis that helps identify recaptured documents.
Identity documents, document recapture detection, Meijering filter.
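The pipeline described above, filter-response histograms fed to a support vector machine, can be sketched as follows. This is a minimal illustration: the synthetic histograms stand in for real Meijering-filter output (in practice `skimage.filters.meijering` could produce the responses), and the clean separation between classes is an assumption made for demonstration, not a claim about real documents.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for normalised Meijering-filter response histograms
# (32 bins per image). The assumption that recapture shifts the response
# distribution upward is illustrative only.
rng = np.random.default_rng(0)
genuine = rng.normal(0.3, 0.05, size=(40, 32))
recaptured = rng.normal(0.6, 0.05, size=(40, 32))

X = np.vstack([genuine, recaptured])
y = np.array([0] * 40 + [1] * 40)  # 0 = genuine, 1 = recaptured

# Train an RBF-kernel SVM on the raw histogram vectors
clf = SVC(kernel="rbf").fit(X, y)
train_acc = clf.score(X, y)
```

With real data, the histograms would be computed from the filter response of each document image and evaluated on a held-out test set rather than the training set.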
Benjamin Caillet1, Steve Devènes1, Gilbert Maître1, Darren Hight2, Alessandro Mirra2, Olivier L Levionnois2, and Alena Simalatsar1, 1Institute of System Engineering, HES-SO Valais-Wallis, Sion, Switzerland, 2Vetsuisse Faculty, University of Bern, Switzerland, 3Inselspital – University Hospital of Bern, Switzerland
The electroencephalogram (EEG) is a collection of signals that represent the electrical activity of brain cells. Its analysis, widely used in the health, medical, and neuroscience domains, aims to capture changes in the pattern of waveforms. EEG signals are collected using small electrodes attached to the scalp. One well-known application of EEG signal analysis is general anaesthesia, where it allows for anaesthesia individualization based on objective measures of EEG signals correlated with the Depth of Anaesthesia (DoA). Well-known examples of EEG-based DoA indices are the Bispectral Index (BIS) and the Patient State Index (PSI). However, it has been shown that the algorithms that compute these DoA indices are not substance-specific and, since they are proprietary, it is impossible to improve them. The problem is even greater in the veterinary domain, because all existing algorithms are developed for humans and therefore cannot be applied or adapted to veterinary practice. Hence, one must start from scratch and develop signal-processing code for EEG analysis. Moreover, signal processing algorithms are often well known only to engineers and can be hard to understand for medical or veterinary staff. Nonetheless, these are the key people to define the "signature of DoA": the signal features that are most relevant to DoA. In this paper, we present our General Anaesthesia Matlab-based Graphical User Interface (GAM-GUI) tool, aimed at simplifying interdisciplinary communication in the process of developing novel DoA index formulations.
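As a self-contained illustration of the kind of EEG feature such tools expose, the sketch below computes spectral band power with NumPy on a synthetic signal. The sampling rate, component frequencies, amplitudes, and band edges are illustrative assumptions, not values taken from the GAM-GUI tool.

```python
import numpy as np

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": a strong 10 Hz (alpha-band) component plus a
# weaker 2 Hz (delta-band) component
sig = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 2 * t)

freqs = np.fft.rfftfreq(len(sig), 1 / fs)
psd = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)

def band_power(lo, hi):
    """Total spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

alpha_power = band_power(8, 13)
delta_power = band_power(0.5, 4)
```

Band-power ratios of this kind are one of the simpler candidate "signature of DoA" features that clinicians and engineers might inspect together.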
Tiago Sousa, Benoît Ries, and Nicolas Guelfi, Department of Computer Science, University of Luxembourg, Esch-sur-Alzette, Luxembourg
The United Nations has declared the current decade (2021-2030) the "UN Decade on Ecosystem Restoration" to join R&D forces in the fight against the ongoing environmental crisis. Given the ongoing degradation of Earth's ecosystems and the crucial services they offer to human society, ecosystem restoration has become a major society-critical issue. Software applications managing ecosystem restoration must be developed rigorously, and reliable models of ecosystems and restoration goals are necessary. This paper proposes a rigorous approach for ecosystem requirements modeling using formal methods from a model-driven software engineering point of view. The authors describe the main concepts at stake with a metamodel in UML and introduce a formalization of this metamodel in Alloy. The formal model is executed with the Alloy Analyzer, and safety and liveness properties are checked against it. This approach helps ensure that ecosystem specifications are reliable and that the specified ecosystem meets the desired restoration goals, treated in our approach as liveness and safety properties. The concepts and activities of the approach are illustrated with CRESTO, a real-world running example of a restored Costa Rican ecosystem.
Language and Formal Methods, Formal Software Engineering, Requirements Engineering, Ecosystem Restoration Modeling, Alloy, UML.
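The paper checks safety and liveness properties with the Alloy Analyzer. As a rough, language-agnostic sketch of what such bounded property checking means, the toy model below enumerates all traces of a hypothetical three-state ecosystem and tests one safety and one (bounded) liveness property. The states, transitions, and properties are invented for illustration and are not taken from CRESTO or the paper's metamodel.

```python
# Hypothetical ecosystem states and transitions (illustrative only)
TRANSITIONS = {
    "degraded": ["recovering"],
    "recovering": ["restored", "degraded"],
    "restored": ["restored"],
}

def traces(start, depth):
    """Enumerate all transition sequences of the given depth from start."""
    frontier = [[start]]
    for _ in range(depth):
        frontier = [tr + [nxt] for tr in frontier for nxt in TRANSITIONS[tr[-1]]]
    return frontier

all_traces = traces("degraded", 6)

# Safety: once "restored", the ecosystem never degrades again
safe = all(
    set(tr[tr.index("restored"):]) == {"restored"}
    for tr in all_traces if "restored" in tr
)
# Bounded liveness: some trace reaches "restored" within the depth bound
live = any("restored" in tr for tr in all_traces)
```

Alloy performs an analogous bounded exhaustive search over model instances, but over relational structures rather than explicit trace lists.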
Ibrahim Yakubu, Department of Quantity Surveying, Faculty of Environmental Technology, Abubakar Tafawa Balewa University, Bauchi, Bauchi State, Nigeria
The study is aimed at proposing the concept of Fuzzy Decision Variables in determining the conditions that give rise to design risk in construction. A structured analysis technique is utilized to determine the Fuzzy Decision Variables that give rise to design risks. A fuzzy set analysis is used to estimate the magnitudes of the variables and obtain the total magnitude of the design risk. The variables of inadequate strategic briefing, inadequate concept briefing, inadequate detailed briefing, inadequate specialist consultants' designs, inadequate architectural services designs, and inadequate building designs were deduced to have caused the design risk, resulting in a significant increase in construction cost. The concept of Fuzzy Decision Variables could be used to predict the occurrence of risk and its likely magnitude. It is recommended that briefing and design should be optimally concluded to minimize design risk.
Fuzzy Decision Variables, risk, design, briefing, magnitude.
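The abstract does not give the aggregation formula, but one common way to combine fuzzy membership grades into a total risk magnitude is the algebraic sum (probabilistic OR). The sketch below applies it to hypothetical membership grades for some of the named decision variables; both the grades and the choice of operator are assumptions made for illustration.

```python
# Hypothetical membership grades (0..1) expressing the degree of
# inadequacy of each Fuzzy Decision Variable (values are invented)
variables = {
    "inadequate_strategic_briefing": 0.4,
    "inadequate_concept_briefing": 0.6,
    "inadequate_detailed_briefing": 0.3,
    "inadequate_specialist_designs": 0.7,
}

# Algebraic sum (probabilistic OR): risk = risk + mu - risk * mu,
# which stays in [0, 1] and grows with each contributing variable
total_risk = 0.0
for mu in variables.values():
    total_risk = total_risk + mu - total_risk * mu
```

The algebraic sum never falls below the largest single membership grade, which matches the intuition that the total design risk is at least as severe as its worst contributing condition.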
Faria Tabassum1, Md. Rahatul Islam
With the gradual increase in program complexity for utilities and various appliances, memory management has been a challenge for computer scientists since the beginning of the technological revolution. Garbage collection is a very effective memory management approach comprising techniques that aim to reclaim unused objects, which in turn returns reusable memory space for further use. Pure reference counting is a popular garbage collection technique that actively tracks unused object references by keeping reference counts. It offers better reliability compared with other, passive techniques, but it cannot reclaim cyclic garbage references. To solve this cycle detection issue, alongside the various improvements available to other techniques, different methods have been proposed and developed. Some of these optimizations borrow concepts from tracing to improve tracking performance, others add cycle detection from existing techniques to gain the advantages of both, and still others emphasize memory allocation to reduce time-complexity overhead. We explore these different methodologies and present a comparative understanding of these optimizations of pure reference counting garbage collection.
pure reference counting, tracing, garbage collection, cycle detection.
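The cycle problem described above can be made concrete with a small sketch: two objects that reference each other keep non-zero reference counts even when nothing else points at them. The trial-deletion detector below is loosely in the spirit of the Bacon-Rajan family of cycle collectors but heavily simplified for illustration: it subtracts internal reference counts, then marks everything reachable from roots or from objects that retain external references.

```python
class Obj:
    """A minimal object with an explicit reference count."""
    def __init__(self, name):
        self.name, self.rc, self.refs = name, 0, []

    def add_ref(self, other):
        self.refs.append(other)
        other.rc += 1

def cyclic_garbage(roots, objects):
    # Trial deletion: remove internal reference counts, then mark
    # everything reachable from roots or externally referenced objects.
    trial = {o: o.rc for o in objects}
    for o in objects:
        for r in o.refs:
            trial[r] -= 1
    stack = [o for o in objects if trial[o] > 0] + list(roots)
    reachable = set()
    while stack:
        o = stack.pop()
        if o not in reachable:
            reachable.add(o)
            stack.extend(o.refs)
    return [o for o in objects if o not in reachable]

# a and b form a cycle with no external references: pure reference
# counting sees rc == 1 on both and never frees them
a, b = Obj("a"), Obj("b")
a.add_ref(b)
b.add_ref(a)
garbage = cyclic_garbage(roots=[], objects=[a, b])
```

Note that both objects still carry a reference count of 1 from each other, which is exactly why a pure counter alone cannot reclaim them.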
Riccardo Porcedda, Università degli Studi di Milano-Bicocca, Milan, Italy
The field of eXplainable Artificial Intelligence (XAI) faces challenges due to the absence of a widely accepted taxonomy that facilitates the quantitative evaluation of explainability in Machine Learning algorithms. In this paper, we propose a novel taxonomy that addresses the current gap in the literature by providing a clear and unambiguous understanding of the key concepts and relationships in XAI. Our approach is rooted in a systematic analysis of existing definitions and frameworks, with a focus on transparency, interpretability, completeness, complexity and understandability as essential dimensions of explainability. This comprehensive taxonomy aims to establish a shared vocabulary for future research. To demonstrate the utility of our proposed taxonomy, we examine a case study of a Recommender System (RS) designed to curate and recommend the most suitable online resources from MERLOT. By employing the SHAP package, we quantify and enhance the explainability of the RS within the context of our newly developed taxonomy.
XAI, Explainability, Interpretability, Recommender Systems, Machine Learning, Education, SHAP.
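The paper employs the SHAP package; as a self-contained illustration of the Shapley values that SHAP estimates, the sketch below computes them exactly for a linear model, where a closed form exists: phi_i = w_i * (x_i - E[x_i]). The weights and data here are synthetic assumptions, not the paper's Recommender System.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))   # synthetic feature matrix
w = np.array([2.0, -1.0, 0.5])  # hypothetical linear model weights

def model(X):
    return X @ w

# For a linear model, Shapley values have a closed form:
# phi_i = w_i * (x_i - E[x_i])
x = X[0]
phi = w * (x - X.mean(axis=0))

# Efficiency property: the per-feature contributions must sum to the
# gap between this prediction and the average prediction
gap = model(x[None])[0] - model(X).mean()
```

For non-linear models no such closed form exists, which is why SHAP resorts to sampling- or tree-based approximations; the efficiency property checked above still holds for the values it produces.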
Sanaz Rasti1, Sarah Dunne2 and Eugenia Siapera2, 1School of Computer Science, University College Dublin, Dublin, Ireland, 2School of Information and Communication Studies, University College Dublin, Dublin, Ireland
The rapid growth of Alt-tech platforms and concerns over their less stringent content moderation policies make them a good case for opinion mining. This study investigates the topic models that exist in a specific Alt-tech channel on Telegram, using data collected at two time points, in 2021 and 2023. Three different topic models, LDA, NMF, and Contextualized NTM, were explored, and a model selection procedure was proposed to choose the best-performing model. To validate the model selection algorithm quantitatively and qualitatively, the approach was tested on publicly available labelled datasets. For all the experiments, data was pre-processed with an effective NLP pre-processing procedure along with an Alt-tech customised list of stop-words. Using the validated topic model selection algorithm, LDA topics with an n-gram range of (4, 4) were extracted from the targeted Alt-tech dataset. The findings from the topic models were qualitatively evaluated by a social scientist and are further discussed. The conclusions of the work suggest that the proposed model selection procedure is effective for the corresponding corpus length and context. Future work avenues are suggested to improve the Alt-tech topic modeling outcome.
Topic Modeling, Topic Model Selection, LDA, NMF, Contextualized NTM, Alt-tech.
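The abstract does not spell out the model selection procedure; one standard ingredient in such procedures is topic coherence. The sketch below scores two hypothetical candidate topic-word lists with UMass coherence over a toy corpus and keeps the higher-scoring one. The corpus, the candidate topics, and the choice of coherence measure are all illustrative assumptions.

```python
import math

# Toy corpus (each string is one document)
corpus = [
    "climate policy debate online platform",
    "platform moderation policy content",
    "vaccine health debate online",
    "content moderation platform rules",
]
docs = [set(d.split()) for d in corpus]

def doc_freq(*words):
    """Number of documents containing all given words."""
    return sum(all(w in d for w in words) for d in docs)

def umass_coherence(topic):
    # UMass coherence: sum over ordered word pairs of
    # log((D(w_i, w_j) + 1) / D(w_j))
    score = 0.0
    for i in range(1, len(topic)):
        for j in range(i):
            score += math.log((doc_freq(topic[i], topic[j]) + 1)
                              / doc_freq(topic[j]))
    return score

# Hypothetical top words from two competing topic models
candidates = {
    "model_A": ["platform", "moderation", "content"],
    "model_B": ["climate", "vaccine", "rules"],
}
best = max(candidates, key=lambda m: umass_coherence(candidates[m]))
```

Words that frequently co-occur in documents ("platform", "moderation", "content") yield a higher coherence score than words that rarely appear together, which is the signal a selection procedure can rank models by.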
Jing Ao, Kara Schatz, and Rada Chirkova, Department of Computer Science, North Carolina State University, Raleigh, North Carolina, USA
Locating unusual temporal trends in data cubes is a recurrent task in a variety of application domains. We consider a version of this problem in which one looks for the data dimensions that are best correlated with the given unusual temporal trends. Our goal is to make such data-cube navigation in search of unusual temporal trends both effective and efficient. Challenges in achieving this goal arise from the rarity of the trends to be located, as well as from the combinatorics involved in locating data-cube nodes with unusual trends. We show that exhaustive solutions are worst-case intractable, and introduce tractable heuristic algorithms that enable effective and efficient data-cube navigation in a particular manner that we call trend surfing. We report the results of testing the proposed algorithms on three real-life data sets; these results showcase the effectiveness and efficiency of the algorithms against the exhaustive baseline.
Unusual temporal trends and associated dimensions, effective and efficient navigation in the data cube, criteria for trend unusualness.
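The idea of navigating a cube toward dimension values whose slices exhibit unusual trends can be sketched on a toy two-dimensional cube. The sketch below is not the paper's trend-surfing algorithm: the cube, the unusualness criterion (absolute slope of a least-squares fit), and the greedy per-dimension scoring are all simplifying assumptions for illustration.

```python
import statistics

# Toy cube: a monthly measure sliced by (region, product)
cube = {
    ("north", "widget"): [10, 11, 10, 12, 11, 10],
    ("north", "gadget"): [10, 14, 18, 22, 26, 30],  # strong upward trend
    ("south", "widget"): [9, 9, 10, 9, 10, 9],
    ("south", "gadget"): [12, 11, 12, 11, 12, 11],
}

def slope(series):
    """Least-squares slope of a time series (used as unusualness score)."""
    n = len(series)
    xs = range(n)
    mx, my = statistics.mean(xs), statistics.mean(series)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, series))
            / sum((x - mx) ** 2 for x in xs))

def best_value(dim_index):
    # Greedy navigation: score each dimension value by the most
    # unusual slice beneath it, and descend into the best one
    values = {key[dim_index] for key in cube}
    return max(values, key=lambda v: max(abs(slope(s))
               for k, s in cube.items() if k[dim_index] == v))

region, product = best_value(0), best_value(1)
```

A greedy descent of this kind avoids enumerating every node of the cube, which is the combinatorial cost the paper's heuristics are designed to sidestep.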