John Magee, Stephen Sheridan, and Christina Thorpe, School of Informatics and Cybersecurity, Technological University Dublin, Ireland
The proliferation of mobile devices allows financial institutions to offer remote customer services, such as remote account opening. Manipulation of identity documents using image-processing software is a low-cost attack that poses a high risk to modern financial systems, exposing these institutions to fraud through crimes related to identity theft. In this paper we describe our exploratory research into applying biomedical image algorithms to the domain of document recapture detection. We perform a statistical analysis to compare different types of recaptured documents and train a support vector machine classifier on the raw histogram data generated using the Meijering filter. The results show that biomedical imaging algorithms such as the Meijering filter have potential as a form of texture analysis that helps identify recaptured documents.
Identity documents, document recapture detection, Meijering filter.
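The Meijering filter itself is a Hessian-based "neuriteness" measure from neuron-tracing work. As a rough illustration of the pipeline described above, the sketch below implements a simplified 2-D Meijering-style response with NumPy/SciPy, turns it into a histogram feature, and feeds that to an SVM; the sigma value, bin count, and simplified eigenvalue formula are our assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

def neuriteness(img, sigma=2.0):
    # Hessian components via Gaussian derivatives
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # per-pixel eigenvalues of the 2x2 Hessian
    mu = (Hxx + Hyy) / 2
    r = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    l1, l2 = mu + r, mu - r
    # Meijering-style modified eigenvalue: add a third of the other one
    lam = np.where(np.abs(l1) > np.abs(l2), l1 + l2 / 3.0, l2 + l1 / 3.0)
    lmin = lam.min()
    # dark ridges give negative eigenvalues; normalise them into [0, 1]
    return np.where(lam < 0, lam / lmin, 0.0) if lmin < 0 else np.zeros_like(lam)

def histogram_feature(img, bins=64):
    # raw histogram of the filter response, used as the classifier input
    hist, _ = np.histogram(neuriteness(img), bins=bins, range=(0.0, 1.0), density=True)
    return hist

# toy training run on random images standing in for genuine/recaptured documents
rng = np.random.default_rng(0)
X = np.stack([histogram_feature(rng.random((32, 32))) for _ in range(8)])
y = [0, 1] * 4
clf = SVC().fit(X, y)
```

In practice one would use a tested implementation such as `skimage.filters.meijering` and tune the scales on real recaptured-document data.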
Benjamin Caillet1, Steve Devènes1, Gilbert Maître1, Darren Hight2, Alessandro Mirra2, Olivier L Levionnois2, and Alena Simalatsar1, 1Institute of System Engineering, HES-SO Valais-Wallis, Sion, Switzerland, 2Vetsuisse Faculty, University of Bern, Switzerland, 3Inselspital – University Hospital of Bern, Switzerland
The electroencephalogram (EEG) is a collection of signals that represent the electrical activity of brain cells. Its analysis, widely used in the health, medical, and neuroscience domains, aims to capture changes in the pattern of waveforms. EEG signals are collected using small electrodes attached to the scalp. One well-known application of EEG signal analysis is general anaesthesia, where it allows for anaesthesia individualization based on objective measures of EEG signals correlated with the Depth of Anaesthesia (DoA). Well-known examples of EEG-based DoA indices are the Bispectral Index (BIS) and the Patient State Index (PSI). However, it has been shown that the algorithms that compute these DoA indices are not substance-specific, and since they are proprietary it is impossible to improve them. The problem is even greater in the veterinary domain, because all existing algorithms are developed for humans and therefore cannot be applied or adapted to veterinary practice. Hence, one must start from scratch and develop new signal processing code for EEG analysis. Moreover, signal processing algorithms are often well known only to engineers and can be hard to understand for medical or veterinary staff. Nonetheless, they are the key people to define the "signature of DoA", the signal features that are most relevant to DoA. In this paper, we present our General Anaesthesia Matlab-based Graphical User Interface (GAM-GUI) tool aimed at simplifying interdisciplinary communication in the process of developing novel DoA index formulations.
Qun Fang, YiHui Yan and GuoQing Ma, School of Computer and Information, Anhui Normal University, Wuhu, China
Gesture recognition is one of the key technologies in the field of intelligent education. It can exploit millimeter-wave (mmWave) signals, which offer high resolution, strong penetration, and robustness to environmental conditions. In this paper, a contactless gesture recognition method based on millimeter-wave radar is proposed. It uses a millimeter-wave radar module to capture the raw signals of hand movements. The received raw radar signals are then preprocessed, including Fourier transform, range compression, Doppler processing, and moving target indication (MTI) noise reduction, to generate a reflection intensity-distance-Doppler (RFDM) image. A convolutional neural network-temporal convolutional network (CNN-TCN) model is then employed to extract spatiotemporal features and evaluate recognition performance through classification. The experimental results demonstrate that our method achieves 98.2% accuracy in same-domain recognition, and 96% and 94% accuracy in cross-domain recognition, indicating high recognition performance and robustness.
Gesture Recognition, Millimeter-Wave Radar, Spatiotemporal Features, Cross-Domain Recognition.
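The preprocessing chain described above (fast-time FFT for range compression, mean-subtraction MTI, slow-time FFT for Doppler) can be sketched for one radar frame as below; the frame layout and FFT conventions are illustrative assumptions, not the authors' exact processing.

```python
import numpy as np

def range_doppler_map(frame):
    # frame: (n_chirps, n_samples) raw ADC data; chirps index slow time
    rp = np.fft.fft(frame, axis=1)              # range compression (fast-time FFT)
    rp = rp - rp.mean(axis=0, keepdims=True)    # MTI: subtract static clutter
    rd = np.fft.fft(rp, axis=0)                 # Doppler processing (slow-time FFT)
    return np.abs(np.fft.fftshift(rd, axes=0))  # reflection-intensity map

frame = np.random.default_rng(0).normal(size=(16, 64))
rdm = range_doppler_map(frame)
```

A stack of such maps over successive frames forms the kind of spatiotemporal input a CNN-TCN model can consume.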
Tiago Sousa, Benoît Ries, and Nicolas Guelfi, Department of Computer Science, University of Luxembourg, Esch-sur-Alzette, Luxembourg
The United Nations has declared the current decade (2021-2030) the "UN Decade on Ecosystem Restoration", joining R&D forces to fight the ongoing environmental crisis. Given the ongoing degradation of Earth's ecosystems and the crucial services they offer to human society, ecosystem restoration has become a major society-critical issue. Software applications managing ecosystem restoration must be developed rigorously, and reliable models of ecosystems and restoration goals are necessary. This paper proposes a rigorous approach to ecosystem requirements modeling using formal methods from a model-driven software engineering point of view. The authors describe the main concepts at stake with a metamodel in UML and introduce a formalization of this metamodel in Alloy. The formal model is executed with the Alloy Analyzer, and safety and liveness properties are checked against it. This approach helps ensure that ecosystem specifications are reliable and that the specified ecosystem meets the desired restoration goals, seen in our approach as liveness and safety properties. The concepts and activities of the approach are illustrated with CRESTO, a real-world running example of a restored Costa Rican ecosystem.
Language and Formal Methods, Formal Software Engineering, Requirements Engineering, Ecosystem Restoration Modeling, Alloy, UML.
Ibrahim Yakubu, Department of Quantity Surveying, Faculty of Environmental Technology, Abubakar Tafawa Balewa University, Bauchi, Bauchi State, Nigeria
The study proposes the concept of Fuzzy Decision Variables for determining the conditions that give rise to design risk in construction. A structured analysis technique is utilized to determine the Fuzzy Decision Variables that give rise to design risks, and a fuzzy set analysis is used to estimate the magnitudes of the variables and obtain the total magnitude of the design risk. The variables of inadequate strategic briefing, inadequate concept briefing, inadequate detailed briefing, inadequate specialist consultants' designs, inadequate architectural services designs, and inadequate building designs were deduced to have caused the design risk, resulting in a significant increase in construction cost. The concept of Fuzzy Decision Variables could be used to predict the occurrence of risk and its likely magnitude. It is recommended that briefing and design be optimally concluded to minimize design risk.
Fuzzy Decision Variables, risk, design, briefing, magnitude.
Faria Tabassum1, Md. Rahatul Islam
With the gradual increase in program complexity for utilities and various appliances, memory management has been a challenge for computer scientists since the beginning of the technological revolution. Garbage collection is a very effective memory management approach comprising techniques that aim to reclaim unused objects, which in turn returns reusable memory space for further use. Pure reference counting is a popular garbage collection technique that actively tracks down any unused object reference by keeping reference counts. It offers better reliability compared with other, passive techniques, but it cannot track cyclic garbage references. To solve this cycle detection issue, along with various other improvements available for other techniques, different methods have been proposed and developed. Some of these optimizations borrow concepts from tracing to improve tracking performance, while others add cycle detection from various existing techniques to gain the pros of both, or simply emphasize memory allocation to reduce time complexity overhead. We explore these different methodologies and present a comparative understanding of these optimizations to reach a conclusion about pure reference counting garbage collection.
pure reference counting, tracing, garbage collection, cycle detection.
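The core limitation discussed above, plus the trial-deletion style of cycle detection that several of the surveyed optimizations borrow, can be shown in a few lines; this toy model (explicit Obj nodes with manual add_ref/release) is our illustrative simplification, not any specific collector.

```python
class Obj:
    """A heap object with a reference count and outgoing references."""
    def __init__(self, name):
        self.name, self.rc, self.refs = name, 0, []

def add_ref(src, dst):
    src.refs.append(dst)
    dst.rc += 1

def release(obj):
    # pure reference counting: free an object when its count drops to zero
    obj.rc -= 1
    if obj.rc == 0:
        for child in obj.refs:
            release(child)
        obj.refs.clear()

def cyclic_garbage(candidates):
    # trial deletion (in the spirit of Bacon-Rajan): subtract counts owed to
    # references internal to the candidate set; if no external count remains,
    # the whole set is an unreachable cycle
    trial = {o: o.rc for o in candidates}
    for o in candidates:
        for child in o.refs:
            if child in trial:
                trial[child] -= 1
    return list(candidates) if all(v == 0 for v in trial.values()) else []
```

Dropping the last external reference to a two-object cycle leaves both counts at one, so plain reference counting leaks them, while the trial-deletion pass identifies the pair as garbage.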
Riccardo Porcedda, Università degli Studi di Milano-Bicocca, Milan, Italy
The field of eXplainable Artificial Intelligence (XAI) faces challenges due to the absence of a widely accepted taxonomy that facilitates the quantitative evaluation of explainability in Machine Learning algorithms. In this paper, we propose a novel taxonomy that addresses the current gap in the literature by providing a clear and unambiguous understanding of the key concepts and relationships in XAI. Our approach is rooted in a systematic analysis of existing definitions and frameworks, with a focus on transparency, interpretability, completeness, complexity and understandability as essential dimensions of explainability. This comprehensive taxonomy aims to establish a shared vocabulary for future research. To demonstrate the utility of our proposed taxonomy, we examine a case study of a Recommender System (RS) designed to curate and recommend the most suitable online resources from MERLOT. By employing the SHAP package, we quantify and enhance the explainability of the RS within the context of our newly developed taxonomy.
XAI, Explainability, Interpretability, Recommender Systems, Machine Learning, Education, SHAP.
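SHAP approximates Shapley values from cooperative game theory; for a handful of features they can be computed exactly, which makes the idea behind the package concrete. The toy model, baseline, and weighting below follow the textbook Shapley formula and are not the Recommender System from the case study.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model over len(x) features.

    Features outside coalition S are replaced by their baseline value.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi
```

For a linear model the attributions recover the coefficients times the feature deviation from the baseline, which is a handy sanity check before trusting approximate explainers.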
Tharindu De Silva and Dileeka Sandamali Alwis, School of Computing, Informatics Institute of Technology, Colombo 06, Sri Lanka
Dialogue summarization refers to the process of condensing and extracting relevant information from written or spoken conversations between two or more individuals. In modern applications, such as customer service, where vast amounts of dialogue data are generated daily, the need for summarization arises. The goal of dialogue summarization is to capture the essential details of a conversation, allowing readers to quickly grasp the main points without reviewing the entire dialogue. For customer service specifically, support agents often need to provide a concise summary of the conversation for future reference, which can be a time-consuming task requiring human resources. To address this challenge, a system called "MultiDialogSum" is being developed. It aims to generate dialogue summaries for customer service interactions and supports multiple languages. This system leverages recent advancements in cross-lingual transfer models and machine translation techniques to achieve its capabilities.
Dialogue Summarization, Natural Language Processing, Cross-lingual Transfer Models, Machine Learning
Pramod Sambhaji Patil1, Dr. Pawar Sudhakar Bhika2 and Dr. Mujahid Husain3, 1Department of Civil Engineering, R.C. Patel Institute of Technology, Shirpur, 2,3Shram Sadhana Bombay Trust's College of Engineering & Technology, Bambhori, Jalgaon
The Tapi is a major river in North Maharashtra that is primarily used for drinking water, irrigation, and industrial purposes. It flows through three major states, with Gujarat having the largest catchment area. In its middle course, there is an established relationship between an associated plant and the Tapi River, which is polluted by agricultural, domestic, and industrial waste. Several small and medium-sized cities are located along the river's main stem and tributaries. Because of this, the river takes in a lot of organic waste from upstream. The amount of harmful industrial discharge is relatively small, but agricultural discharge is most commonly collected from the river's own catchment area; such activities are most common in the river's final 45 km stretch (Chopda to Shirpur). The primary objective of this research is to investigate the pollutant range at each intake point and the causes of Tapi water contamination. Samples were collected from nine places, each 5 kilometres apart. Tests under both steady and flowing conditions were run for pH, turbidity, DO, COD, BOD, chloride, fluoride, coliform, and heavy metals. These parameters were chosen based on the country's drinking water quality norms.
Water quality, industrial discharge, agricultural practices.
Raghav Subramaniam, Independent Researcher, USA
With the growing importance of computer science education globally, we look at possibilities for modifying computer science education to be more intuitive and accessible. To do this, we investigate the integration of music specifically into computer science education, to observe whether a better framework for curriculum development can be established for the benefit of learners. Although multiple challenges were found, such as diverse student backgrounds, a possible lack of educator proficiency, and assessment complexity, if these challenges are addressed this interdisciplinary approach could offer a more optimal learning experience for students, supplementing traditional coding education with new lenses.
Computer Science, CS Education, Music, Music Education, Programming Languages.
Sanaz Rasti1, Sarah Dunne2 and Eugenia Siapera2, 1School of Computer Science, University College Dublin, Dublin, Ireland, 2School of Information and Communication Studies, University College Dublin, Dublin, Ireland
The rapid growth of Alt-tech platforms, and concerns over their less stringent content moderation policies, make them a good case for opinion mining. This study investigates the topic models that exist in a specific Alt-tech channel on Telegram, using data collected at two time points, in 2021 and 2023. Three different topic models, LDA, NMF and Contextualized NTM, were explored, and a model selection procedure was proposed to choose the best-performing model among them. To validate the model selection algorithm quantitatively and qualitatively, the approach was tested on publicly available labelled datasets. For all the experiments, data was pre-processed employing an effective NLP pre-processing procedure along with an Alt-tech-customised list of stop-words. Using the validated topic model selection algorithm, LDA topics with n-gram range (4, 4) were extracted from the targeted Alt-tech dataset. The findings from the topic models were qualitatively evaluated by a social scientist and are further discussed. The work concludes that the proposed model selection procedure is effective for the corresponding corpus length and context. Future work avenues are suggested to improve the Alt-tech topic modeling outcome.
Topic Modeling, Topic Model Selection, LDA, NMF, Contextualized NTM, Alt-tech.
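A minimal version of the model-selection idea, choosing between LDA and NMF by a coherence score, can be sketched with scikit-learn; the toy corpus, UMass-style coherence, and two-topic setting are illustrative assumptions, and the study's actual procedure (including Contextualized NTM and the customised stop-word list) is richer.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, NMF

# toy stand-in for the Telegram corpus
docs = [
    "vaccine mandate protest freedom",
    "vaccine passport mandate government",
    "election fraud ballot recount",
    "ballot fraud election audit",
] * 5

vec = CountVectorizer()
X = vec.fit_transform(docs)

def umass(components, X, top_n=4):
    # UMass-style coherence from co-document frequencies of top topic words
    D = (X > 0).astype(int)
    C = np.asarray((D.T @ D).todense())
    scores = []
    for comp in components:
        top = np.argsort(comp)[-top_n:]
        pairs = [np.log((C[i, j] + 1) / C[j, j]) for i in top for j in top if i != j]
        scores.append(np.mean(pairs))
    return float(np.mean(scores))

models = {
    "LDA": LatentDirichletAllocation(n_components=2, random_state=0).fit(X),
    "NMF": NMF(n_components=2, init="nndsvd", random_state=0).fit(X),
}
best = max(models, key=lambda name: umass(models[name].components_, X))
```

A fuller procedure would also sweep the number of topics and validate the winner on labelled data, as the study does.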
Abdul Rehman, Xiaosong Yang, and Kavisha Jayathunge, Faculty of Media and Communications, Bournemouth University, Bournemouth, United Kingdom
Speech intonations are implied by subtle differences in syllables and convey ambiguous meanings, making it difficult for machines to interpret them. In this work, we assume that the intonations for emotions or interrogative statements have regular underlying prosodic patterns; therefore, if an unsupervised intonation template dictionary is created, the similarity with certain templates can be used as an encoding mechanism for higher-level labels. We use piecewise interpolation of syllable-level formant features to create templates of intonations. We experimented on three datasets for speech emotion recognition and on a set of declarative-interrogative utterances to evaluate the affinity of intonation templates with paralinguistic labels. The results show that basic emotions can be detected for individual syllables with almost double the accuracy of chance. Moreover, certain intonation templates were found to have a significant correlation with interrogative implications.
speech processing, emotion recognition, computational paralinguistics.
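The template idea above, resampling each syllable's contour to a fixed length and then matching against an unsupervised dictionary, can be sketched as follows; the synthetic rising/falling contours, k-means dictionary, and similarity encoding are our assumptions standing in for the paper's formant features and datasets.

```python
import numpy as np
from sklearn.cluster import KMeans

def syllable_template(times, values, n_points=10):
    # piecewise-linear resampling of a syllable-level contour to fixed length
    t = np.linspace(times[0], times[-1], n_points)
    v = np.interp(t, times, values)
    return (v - v.mean()) / (v.std() + 1e-9)  # normalise register and range

rng = np.random.default_rng(0)
contours = []
for _ in range(30):  # synthetic rising syllables
    t = np.sort(rng.uniform(0, 0.2, 8)); t[0], t[-1] = 0.0, 0.2
    contours.append(syllable_template(t, 100 + 200 * t + rng.normal(0, 2, 8)))
for _ in range(30):  # synthetic falling syllables
    t = np.sort(rng.uniform(0, 0.2, 8)); t[0], t[-1] = 0.0, 0.2
    contours.append(syllable_template(t, 140 - 200 * t + rng.normal(0, 2, 8)))

# unsupervised template dictionary
dictionary = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.array(contours))

def encode(template):
    # similarity to each dictionary template as the higher-level encoding
    d = np.linalg.norm(dictionary.cluster_centers_ - template, axis=1)
    return np.exp(-d)
```

The resulting similarity vector can then be correlated with emotion or interrogative labels, which is the evaluation the abstract describes.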
Parisa Safikhani and David Broneske, Department of Research Infrastructure and Methods, DZHW, Hannover, Germany
Recent advancements in Automated Machine Learning (AutoML) have led to the emergence of Automated Natural Language Processing (AutoNLP), a subfield focused on automating NLP model development. Existing NLP toolkits provide various tools and modules but lack a free AutoNLP version. To this end, architecting the design decisions and tuning knobs of AutoNLP is still essential for enhancing performance in various industries and applications. Therefore, analyzing how different text representation methods affect the performance of AutoML systems is an essential starting point for investigating AutoNLP. In this paper, we present a comprehensive study of the performance of AutoPyTorch, an open-source AutoML framework, with various text representation methods for binary text classification tasks. The novelty of our research lies in investigating the impact of different text representation methods on AutoPyTorch's performance, which is an essential step toward extending AutoPyTorch to also support AutoNLP tasks. We conduct experiments on five diverse datasets to evaluate the performance of both contextual and non-contextual text representation methods, including one-hot encoding, BERT (base uncased), fine-tuned BERT, LSA, and a method with no explicit text representation. Our results reveal that, depending on the task, different text representation methods may be the most suitable for extracting features to build a model with AutoPyTorch. Furthermore, the results indicate that fine-tuned BERT models consistently outperform the other text representation methods across all tasks, although during the fine-tuning process the fine-tuned model had the advantage of benefiting from labels. Hence, these findings support the notion that integrating fine-tuned models, or a model fine-tuned on a large open-source dataset covering binary text classification tasks, as text representation methods in AutoPyTorch is a reasonable step toward developing AutoPyTorch for NLP tasks.
Automated Machine Learning (AutoML), Automated Natural Language Processing (AutoNLP), Contextual- and non-contextual text representation, AutoPyTorch, Binary Text Classification, One-hot encoding, BERT, Fine-tuned BERT, Latent Semantic Analysis (LSA).
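The representation comparison can be miniaturized with scikit-learn: the same downstream classifier is trained on one-hot features and on LSA features (TF-IDF followed by truncated SVD). The toy texts and the logistic-regression stand-in for AutoPyTorch are assumptions for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# tiny binary sentiment corpus standing in for the five datasets
texts = ["great product", "terrible service", "love it", "awful experience"] * 5
labels = [1, 0, 1, 0] * 5

representations = {
    "one-hot": CountVectorizer(binary=True),
    "LSA": make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0)),
}

scores = {}
for name, rep in representations.items():
    X = rep.fit_transform(texts)          # text representation step
    clf = LogisticRegression().fit(X, labels)  # stand-in for the AutoML learner
    scores[name] = clf.score(X, labels)
```

In the study, this slot is filled by AutoPyTorch's searched architectures and by BERT-based representations, with evaluation on held-out data rather than training accuracy.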
Olga Simek and Courtland VanDam, MIT Lincoln Laboratory Lexington, MA, USA
Arabic named entity recognition (NER) is a challenging problem, especially in conversational data such as social media posts. To address this problem, we propose an Arabic weak-learner NER model called ANER-HMM, which leverages low-quality predictions that provide partial recognition of entities. By combining these predictions, we achieve state-of-the-art NER accuracy for out-of-domain predictions. ANER-HMM leverages a hidden Markov model to combine multiple predictions from weak learners and gazetteers. We demonstrate that ANER-HMM outperforms state-of-the-art Arabic NER methods without requiring any labeled data or training deep learning models, which often require large computing resources.
Weak supervision, named entity recognition, Arabic, Twitter.
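One way to combine weak-learner tag sequences with an HMM, vote-based pseudo-emissions decoded by Viterbi, is sketched below; the tag set, uniform parameters, and voting scheme are our illustrative assumptions rather than ANER-HMM's estimation procedure.

```python
import numpy as np

TAGS = ["O", "B-PER", "I-PER"]

def combine_weak_labels(weak_preds, trans, prior):
    """Decode one consensus tag sequence from several weak labelers.

    weak_preds: list (per labeler) of per-token tag lists.
    trans: (K, K) transition matrix; prior: (K,) initial distribution.
    """
    T, K = len(weak_preds[0]), len(TAGS)
    # pseudo-emissions: fraction of weak learners voting for each tag
    votes = np.full((T, K), 1e-6)
    for pred in weak_preds:
        for t, tag in enumerate(pred):
            votes[t, TAGS.index(tag)] += 1
    emis = votes / votes.sum(axis=1, keepdims=True)
    # standard Viterbi decoding in log space
    delta = np.log(prior) + np.log(emis[0])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(emis[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [TAGS[i] for i in reversed(path)]
```

A real system would learn the transition and emission parameters (e.g. with EM over the unlabeled weak predictions) instead of using uniform values.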
Stine Nyhus Larsen and Rebekah Baglini, Center for Humanities Computing, Aarhus University Aarhus, Denmark
This research proposes a novel method to interpret narratives from vast text corpora, notably during significant events like the COVID pandemic. Utilizing a framework inspired by Tangherlini and Shahsavari, this study presents an evolving pipeline to visualize narratives in Danish text using network graphs, leveraging Natural Language Processing (NLP) tools. The approach, while still under refinement, was successfully tested on two case studies. Preliminary results show promising narrative representations, even with minimal dataset curation. However, challenges like coreference resolution and semantic triplet extraction remain. The thesis also hints at potential enhancements and future applications of the pipeline for comprehensive narrative analysis.
NLP, knowledge graphs, co-reference resolution, NER, Danish.
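The narrative-network step described above, aggregating extracted (subject, relation, object) triplets into a weighted graph, can be sketched in plain Python; the Danish triplets below are invented examples, and a real pipeline would apply coreference resolution before this stage.

```python
from collections import defaultdict

# hypothetical triplets as they might come out of a Danish SVO-extraction step
triplets = [
    ("regeringen", "indfører", "nedlukning"),        # "the government imposes a lockdown"
    ("regeringen", "forsvarer", "nedlukning"),       # "the government defends the lockdown"
    ("borgere", "protesterer mod", "nedlukning"),    # "citizens protest against the lockdown"
]

def build_narrative_graph(triplets):
    # nodes are actants; each edge aggregates the relations observed between a pair
    edges = defaultdict(list)
    for subj, rel, obj in triplets:
        edges[(subj, obj)].append(rel)
    return {pair: {"relations": rels, "weight": len(rels)} for pair, rels in edges.items()}

graph = build_narrative_graph(triplets)
```

Edge weights then drive the visualization, with heavier edges marking the relationships that dominate the corpus's narrative.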
Anusuya Krishnan1 and Kennedyraj2, 1College of IT, UAE University, Al Ain, UAE, 2College of IT, Noorul Islam University, Kanyakumari, India
The exponential growth of online social network platforms and applications has led to a staggering volume of user-generated textual content, including comments and reviews. Consequently, users often face difficulties in extracting valuable insights or relevant information from such content. To address this challenge, machine learning and natural language processing algorithms have been deployed to analyze the vast amount of textual data available online. In recent years, topic modeling techniques have gained significant popularity in this domain. In this study, we comprehensively examine and compare six frequently used topic modeling methods specifically applied to customer reviews: latent semantic analysis (LSA), latent Dirichlet allocation (LDA), non-negative matrix factorization (NMF), the pachinko allocation model (PAM), Top2Vec, and BERTopic. By practically demonstrating their benefits in detecting important topics, we aim to highlight their efficacy in real-world scenarios. To evaluate the performance of these topic modeling methods, we carefully select two textual datasets. The evaluation is based on standard statistical evaluation metrics such as the topic coherence score. Our findings reveal that BERTopic consistently yields more meaningful extracted topics and achieves favorable results.
Natural Language Processing, Topic Modeling & Customer Reviews.
Jiamin Lu and Chenguang Xue, Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, Nanjing, China
Recent span-based joint extraction models have demonstrated significant advantages in both entity recognition and relation extraction. These models treat text spans as candidate entities, and span pairs as candidate relation tuples, achieving state-of-the-art results on datasets like ADE. However, these models encounter a significant number of non-entity spans or irrelevant span pairs during the tasks, which impairs performance significantly. To address this issue, this paper introduces a span-based multitask entity-relation joint extraction model. The approach employs multitask learning to alleviate the impact of negative samples on the entity and relation classifiers. Additionally, we leverage the IoU concept to introduce positional information into the entity classifier, achieving span boundary detection. Furthermore, by incorporating the entity logits predicted by the entity classifier into the embedded representation of entity pairs, the semantic input of the relation classifier is enriched. Experimental results demonstrate that our proposed SpERT.MT model can effectively mitigate the adverse effects of excessive negative samples on model performance. Furthermore, the model exhibits commendable performance on three widely used public datasets: CoNLL04, SciERC, and ADE.
Natural language processing, Joint entity and relation extraction, Span-based model, Multitask learning, Negative samples.
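The IoU signal mentioned for span boundary detection is easy to make concrete: a candidate span's overlap with the gold entity spans can grade "near miss" negatives differently from random negatives. The threshold and weighting below are illustrative assumptions, not SpERT.MT's exact formulation.

```python
def span_iou(a, b):
    """IoU of two token spans given as (start, end), end exclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def boundary_weight(candidate, gold_spans, threshold=0.7):
    # grade a candidate span by its best overlap with any gold entity, so the
    # entity classifier sees near-boundary spans differently from random spans
    best = max((span_iou(candidate, g) for g in gold_spans), default=0.0)
    return 1.0 if best >= threshold else best
```

Such a soft positional signal can be used as a sample weight or an auxiliary regression target when training the entity classifier.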
Jiamin Lu and Shitao Wang, Key Laboratory of Water Big Data Technology of Ministry of Water Resources, Hohai University, Nanjing, China
Across various domains, data from different sources such as Baidu Baike and Wikipedia often manifest in distinct forms. Current entity matching methodologies predominantly focus on homogeneous data, characterized by attributes that share the same structure and concise attribute values. However, this orientation poses challenges in handling data with diverse formats. Moreover, prevailing approaches aggregate the similarity of attribute values between corresponding attributes to ascertain entity similarity, yet they often overlook the intricate interrelationships between attributes, where one attribute may have multiple associations. The simplistic approach of pairwise attribute comparison fails to harness the wealth of information encapsulated within entities. To address these challenges, we introduce a novel entity matching model, dubbed the "Entity Matching Model for Capturing Complex Attribute Relationships" (EMM-CCAR), built upon pre-trained models. Specifically, this model transforms the matching task into a sequence matching problem to mitigate the impact of varying data formats. Moreover, by introducing attention mechanisms, it identifies complex relationships between attributes, emphasizing the degree of matching among multiple attributes rather than one-to-one correspondences. Through the integration of the EMM-CCAR model, we adeptly surmount the challenges posed by data heterogeneity and intricate attribute interdependencies. In comparison with the prevalent DER-SSM and Ditto approaches, our model achieves improvements of approximately 4% and 1% in F1 scores, respectively. This furnishes a robust solution for addressing the intricacies of attribute complexity in entity matching.
Entity Matching, Attribute Comparison, Attention, Pre-trained Model.
H. Avetisyan1 and D. Broneske2, 1Research Area Research Infrastructure and Methods, 2The German Centre for Higher Education Research and Science Studies (DZHW)
Context: Language models’ growing role in natural language processing necessitates a deeper understanding of their linguistic knowledge. Linguistic probing tasks have become crucial for model explainability, designed to evaluate models’ understanding of various linguistic phenomena. Objective: This systematic review critically assesses the linguistic knowledge of language models via linguistic probing, providing a comprehensive overview of the understood linguistic phenomena and identifying future research areas. Method: We performed an extensive search of relevant academic databases and analyzed 57 articles published between October 2018 and October 2022. Results: While language models exhibit extensive linguistic knowledge, limitations persist in their comprehension of specific phenomena. The review also points to a need for consensus on evaluating language models’ linguistic knowledge and the linguistic terminology used. Conclusion: Our review offers an extensive look into linguistic knowledge of language models through linguistic probing tasks. This study underscores the importance of understanding these models’ linguistic capabilities for effective use in NLP applications and for fostering more explainable AI systems.
LLMs, linguistic knowledge, probing, analysis of LMs.
Jing Ao, Kara Schatz, and Rada Chirkova, Department of Computer Science, North Carolina State University, Raleigh, North Carolina, USA
Locating unusual temporal trends in data cubes is a recurrent task in a variety of application domains. We consider a version of this problem in which one looks for the data dimensions that are best correlated with the given unusual temporal trends. Our goal is to make such data-cube navigation in search of unusual temporal trends both effective and efficient. Challenges in achieving this goal arise from the rarity of the trends to be located, as well as from the combinatorics involved in locating in data cubes nodes with unusual trends. We show that exhaustive solutions are worst-case intractable, and introduce tractable heuristic algorithms that enable effective and efficient data-cube navigation in a particular manner that we call trend surfing. We report the results of testing the proposed algorithms on three real-life data sets; these results showcase the effectiveness and efficiency of the algorithms against the exhaustive baseline.
Unusual temporal trends and associated dimensions, effective and efficient navigation in the data cube, criteria for trend unusualness.
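A tiny version of the unusualness scoring that drives such navigation: among sibling slices of a cube dimension, score each slice's temporal trend by how far its slope deviates from its siblings', then surf toward the most deviant slice. The slope/z-score criterion here is an illustrative stand-in for the paper's criteria for trend unusualness.

```python
import numpy as np

def slope(ts):
    # least-squares slope of a time series
    t = np.arange(len(ts))
    return np.polyfit(t, np.asarray(ts, float), 1)[0]

def most_unusual_slice(slices):
    """slices: {dimension value: time series}; returns the most deviant slice.

    Score = |z-score| of the slice's slope among its sibling slices.
    """
    names = list(slices)
    s = np.array([slope(slices[n]) for n in names])
    z = (s - s.mean()) / (s.std() + 1e-12)
    i = int(np.abs(z).argmax())
    return names[i], float(abs(z[i]))
```

Greedy navigation would apply this score at each cube node, descending into the dimension value whose trend is most unusual, rather than enumerating the exponentially many slices.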
Elissa Nadia Madi, Azwa Aziz, and Binyamin Yusof, Faculty of Informatics and Computing, Universiti Sultan Zainal Abidin (UniSZA), Besut Campus, Terengganu, Malaysia
The problem of reasoning under uncertainty is widely recognised as significant in information technology, and a wide range of methods has been proposed to address it. Uncertainty arises when imperfect information is the only available source for solving a problem with quantitative methods; a qualitative method is therefore needed when no numerical information is available. Linguistic uncertainties related to the qualitative part must be considered and managed wisely. Such uncertainty commonly involves decision-making, as the problem depends on human perceptions. This study explores the relationship and difference between two variables, namely the level of uncertainty in the input and the change in the output, based on multi-criteria decision analysis; there is a positive relationship between these two variables. Based on this, a novel interval type-2 fuzzy membership function generation technique is proposed. It can accurately map the decision maker's perceptions to the fuzzy set model, reducing the potential loss of information. In the literature, the output ranking of the system is presented as a crisp number; this study instead proposes a new form of output, in interval form, based on multi-criteria decision analysis. Overall, this study provides new insight into why we should not ignore uncertainty when it affects the input, and it provides an intelligent way to map human perceptions to the system using a fuzzy set.
Fuzzy Number, Interval Type-2 fuzzy set, TOPSIS.
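Since the study builds on TOPSIS for multi-criteria decision analysis, a plain (type-1, crisp) TOPSIS ranking is a useful reference point; an interval type-2 extension would replace the crisp decision matrix with interval memberships. The weights and benefit flags below are assumptions for illustration.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness coefficients for alternatives (rows) over criteria (columns).

    benefit[j] is True for benefit criteria, False for cost criteria.
    """
    M = matrix / np.linalg.norm(matrix, axis=0)   # vector normalisation
    V = M * weights                               # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # closeness: higher is better
```

The study's contribution replaces the crisp output of this ranking with an interval, so that the uncertainty in the input is still visible in the result.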
Atrin Barzegar, Yas Barzegar, Department of Management, Sapienza University of Rome, 00161 Rome, Italy
Drinking water quality assessment is a major issue today; technology and practices are continuously improving, and Artificial Intelligence (AI) methods are proving their efficiency in this domain. The current research develops a hierarchical fuzzy model for predicting drinking water quality in Rome (Italy). The Mamdani fuzzy inference system (FIS) is applied with different defuzzification methods. The proposed model includes three intermediate fuzzy models and one final fuzzy model, each consisting of three input parameters and 27 fuzzy rules. The model is developed for water quality assessment with a dataset considering nine parameters (alkalinity, hardness, pH, Ca, Mg, fluoride, sulphate, nitrates, and iron). Fuzzy-logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in drinking water quality assessment; they are effective for managing complicated, uncertain water systems and predicting drinking water quality. The FIS method can provide an effective solution for complex systems and can be modified easily to improve performance.
Water quality, smart cities, fuzzy logic, fuzzy inference systems, membership functions, water attribute.
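A two-rule Mamdani FIS with triangular membership functions and centroid defuzzification illustrates the mechanics of such a model; the membership breakpoints, rule base, and 0-100 quality scale are invented for illustration and are far smaller than the paper's 27-rule blocks.

```python
import numpy as np

def tri(x, a, b, c):
    # triangular membership function with support [a, c] and peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_quality(ph, hardness):
    y = np.linspace(0.0, 100.0, 501)  # output universe: water quality score
    # illustrative rule base:
    #   R1: IF pH is neutral AND hardness is low  THEN quality is good
    #   R2: IF pH is extreme OR hardness is high THEN quality is poor
    ph_neutral = tri(ph, 6.0, 7.0, 8.0)
    hard_low = tri(hardness, 0.0, 50.0, 150.0)
    hard_high = tri(hardness, 100.0, 300.0, 500.0)
    fire1 = min(ph_neutral, hard_low)          # fuzzy AND = min
    fire2 = max(1.0 - ph_neutral, hard_high)   # fuzzy OR = max
    good = tri(y, 50.0, 100.0, 150.0)          # output sets (clipped at domain edge)
    poor = tri(y, -50.0, 0.0, 50.0)
    # Mamdani implication (min) and aggregation (max)
    agg = np.maximum(np.minimum(fire1, good), np.minimum(fire2, poor))
    return float((y * agg).sum() / agg.sum())  # centroid defuzzification
```

The hierarchical design in the paper chains several such blocks: three intermediate FIS outputs become the inputs of the final FIS.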
Zhefu Song1, Zhihua Wang2, Zhaoyang Fu3, Wu Chao4, 1Zhejiang University, 2Shanghai Institute for Advanced Study of Zhejiang University, 3Suzhou University of Science and Technology, 4Zhejiang University
With the continuous development of artificial intelligence, its impact on people's lives is becoming increasingly significant. Countries, universities, and students have increased their emphasis on learning artificial intelligence knowledge. This article designs and proposes a teaching mode called mo-tutor, which achieves the goal of learning artificial intelligence (AI) knowledge through video playback, segmented speech explanations, picture and text sketching, and online coding. This mode effectively coordinates the AI knowledge system and lowers the barrier for students learning AI.
artificial intelligence, online education, education, talent cultivation, online learning.
Sina Shekari, Department of Electrical Engineering, Ferdowsi University of Mashhad, Iran
In this study, speed-sensorless fuzzy control of an Axial Flux Permanent Magnet Synchronous Motor (AFPMSM) using an advanced flux concept at very low speeds (close to zero) is investigated. Due to their higher torque at low speeds and higher efficiency, Axial Flux Motors (AFMs) have more applications than Radial Flux Motors (RFMs), including spacecraft, electric vehicles, direct-driven screw propellers, blowers, etc. High prices and the need for maintenance, which reduces system reliability, limit the use of sensors in drive systems. For this reason, several methods have been proposed to estimate the motor speed and position. The method presented in this paper uses the extended rotor flux concept to estimate the motor speed. From a mathematical point of view, this concept converts an Interior Permanent Magnet (IPM) motor model to a Surface-mounted Permanent Magnet (SPM) one, which simplifies the computations. Furthermore, in this paper a fuzzy PI controller is used that offers better results than the classic PI controller.
axial flux synchronous motor, extended rotor flux, fuzzy PI controller, speed estimation.
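The fuzzy PI scheme described in the abstract above can be illustrated with a minimal sketch: a Sugeno-style rule base that scales the PI gains from the fuzzified speed error. The membership breakpoints and gain multipliers below are illustrative assumptions, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pi_gains(error, base_kp=1.0, base_ki=0.1):
    """Scale PI gains from the fuzzified |speed error|: a larger error
    strengthens Kp (fast correction), a smaller error strengthens Ki
    (steady-state accuracy) -- a common fuzzy-PI heuristic.
    Breakpoints and multipliers are illustrative, not from the paper."""
    e = min(abs(error), 1.0)                      # normalised error magnitude
    mu = {"small":  tri(e, -0.5, 0.0, 0.5),
          "medium": tri(e,  0.0, 0.5, 1.0),
          "large":  tri(e,  0.5, 1.0, 1.5)}
    total = sum(mu.values())
    # Weighted-average (Sugeno-style) defuzzification of gain multipliers.
    kp_mult = (0.8 * mu["small"] + 1.0 * mu["medium"] + 1.5 * mu["large"]) / total
    ki_mult = (1.5 * mu["small"] + 1.0 * mu["medium"] + 0.6 * mu["large"]) / total
    return base_kp * kp_mult, base_ki * ki_mult
```

At zero error the rule base favours the integral term; at large error it boosts the proportional term, which is what gives the fuzzy PI its advantage over fixed classic PI gains.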
Wenqiang Song, Chuan He, Zhaoyang Xie and Yuanyuan Chai, JIT Research Institute, Beijing, China
The continuous development of computer network technology has accelerated the pace of informatization, and at the same time, network security issues are becoming increasingly prominent. Networking technology with different network topologies is one of the important means of solving network security problems. The security of a VPN is based on the division of geographical boundaries, but its granularity is relatively coarse, making it difficult to cope with dynamic changes in the security situation. A zero trust network solves the VPN problem through peer-to-peer authorization and continuous verification, but most solutions use a central proxy device, so the central node becomes the bottleneck of the network. This paper puts forward a hard-NAT traversal formula based on the birthday paradox, which solves the long-standing problem of hard NAT traversal. Based on this, a full mesh networking technology built on a variable-parameter, full-dimensional spatial peer-to-peer grid topology is proposed, which covers all types of networking schemes and realizes peer-to-peer resource interconnection at both the methodological and engineering levels.
zero trust, birthday paradox, hard NAT, port scanning, NAT traversal, full mesh networking technology.
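The birthday-paradox effect behind hard-NAT traversal can be sketched numerically: if one peer opens a few hundred random source-port mappings and the other probes a few hundred random destination ports, a collision becomes likely even though the full port space holds ~64k ports. The function below is a generic estimate of that collision probability, not the paper's exact formula; its name and parameters are illustrative.

```python
def collision_probability(n_ports: int, k_open: int, k_probe: int) -> float:
    """Probability that at least one of k_probe random destination-port
    guesses hits one of k_open randomly assigned NAT mappings, out of
    n_ports possible ports (sampling without replacement)."""
    p_miss = 1.0
    for i in range(k_probe):
        # Each probe misses with (ports not mapped and not yet tried) / (ports not yet tried).
        p_miss *= (n_ports - k_open - i) / (n_ports - i)
    return 1.0 - p_miss

# With ~64k usable ports, 256 openings and 256 probes already give
# roughly a 63% success chance per round -- the birthday-paradox effect.
p = collision_probability(65535, 256, 256)
```

Repeating a few independent rounds pushes the success probability close to one, which is why symmetric ("hard") NATs can be traversed with modest port-scanning effort.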
Serkan Macit and Prof. Dr. B. Berk Üstündağ, Department of Computer Engineering, Istanbul Technical University, İstanbul, Turkey
Machine learning (ML) algorithms have garnered considerable attention and recognition in the context of addressing time series prediction challenges. However, constructing an ML model with the optimal architecture and hyperparameters that effectively captures the intricacies inherent in time series data can be challenging. If the data encompasses multivariable characteristics with chaotic or stochastic properties and has missing parts, the task becomes more challenging still. To address overfitting, a common machine learning problem in such cases, a cascade neural network named PECNET* is a highly favorable alternative. PECNET addresses the problem by training separate neural networks for different frequency bands and types of input data, utilizing the remaining errors of these networks as the target labels. This approach enhances the orthogonality of data characteristics across time windows and subsequently reduces the likelihood of overfitting as additional networks are added, thereby improving prediction performance. In this study, to tackle the escalation in computational complexity and preempt implementation errors in time synchronization management, the previously experimentally tested PECNET was transformed into modular, parametric framework software, in line with the prevalent use of off-the-shelf frameworks in the majority of artificial intelligence studies. The developed framework software was subsequently assessed for earthquake prediction using chaotic time series data from the Electrostatic Rock Stress (ERS) monitoring method developed at ITU.
* Predictive Error Compensated Wavelet Neural Networks
time series prediction, neural network, deep learning, data fusion, framework design, discrete wavelet transformation, earthquake prediction.
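The cascade idea summarized above (each added network trains on the residual error left by the previous ones, with each stage seeing a different frequency band) can be sketched with simple least-squares stages standing in for the neural networks. The crude two-band split below is only a stand-in for the wavelet decomposition, and all function names are illustrative, not from the PECNET framework.

```python
import numpy as np

def windows(series, width):
    """Lagged input windows X and next-step targets y."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    return X, series[width:]

def lstsq_fit(X, y):
    """One 'network' stand-in: least-squares linear predictor with bias."""
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def lstsq_predict(w, X):
    return np.c_[X, np.ones(len(X))] @ w

def band_features(X):
    """Crude two-band split standing in for the wavelet bands:
    a smoothed (low-frequency) and a differenced (high-frequency) view."""
    low = (X[:, :-1] + X[:, 1:]) / 2.0
    high = X[:, 1:] - X[:, :-1]
    return [low, high]

def fit_cascade(series, width):
    """PECNET-style cascade: each stage fits the previous stage's residual."""
    X, y = windows(series, width)
    stages, residual = [], y.astype(float).copy()
    for F in band_features(X):                      # one stage per band
        w = lstsq_fit(F, residual)
        stages.append(w)
        residual = residual - lstsq_predict(w, F)   # remaining error = next target
    return stages

def predict_cascade(stages, X):
    """Final prediction is the sum of all stage predictions."""
    return sum(lstsq_predict(w, F) for w, F in zip(stages, band_features(X)))
```

Because each stage is fitted to the previous residual, adding a stage can never increase the in-sample error, which mirrors the paper's claim that prediction performance improves as networks are added.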
Behruz Saidov and Vladimir Telezhkin, South Ural State University (National Research University), Chelyabinsk, Russia
This article explores methods of information processing in automated control systems based on ultrasonic transceivers. Modern automated control systems (ACS), including those for special purposes, widely use information processing methods that rely on digital technologies and network data exchange between various sensors. Recently, the ultrasonic sensor has been widely used in various applications, in particular for transmitting and receiving information. The advantage of these systems is, on the one hand, the possibility of communication at both close and remote range and, on the other hand, a high probability of detecting and eliminating leaks of transmitted information. Ultrasonic transducers and ultrasonic sensors are devices that generate or receive ultrasonic energy. They can be divided into three broad categories: transmitters, receivers, and transceivers. Transmitters convert electrical signals to ultrasound, receivers convert ultrasound to electrical signals, and transceivers can both transmit and receive ultrasound. The purpose of this work is to process information in automated control systems using an ultrasonic sensor. To this end, an experiment was conducted to study transceivers at different distances. According to the results of the pilot study, it was concluded that the ultrasonic sensor can transmit and receive effectively at a frequency of 20 kHz over a distance of about 10 meters.
information processing, ultrasonic signal, automated control systems.
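The reported operating point (20 kHz over about 10 m) is easy to put in physical context: at the standard speed of sound in air, the wavelength is about 17 mm and the one-way propagation delay over 10 m is about 29 ms. A quick sketch, assuming sound in air at roughly 20 °C:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C (assumed conditions)

def wavelength(freq_hz: float) -> float:
    """Acoustic wavelength at the given frequency."""
    return SPEED_OF_SOUND / freq_hz

def one_way_delay(distance_m: float) -> float:
    """One-way propagation time over the given distance."""
    return distance_m / SPEED_OF_SOUND

lam = wavelength(20_000)       # ~17 mm at 20 kHz
delay = one_way_delay(10.0)    # ~29 ms over the reported 10 m range
```

The millisecond-scale delay is the floor on round-trip latency for any data exchange between such transceivers at this range.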