International Journal of Electrical and Computer Engineering (IJECE), ISSN 2088-8708, e-ISSN 2722-2578 is an official...
Imbalanced datasets pose a significant challenge in credit card fraud detection, hindering the training effectiveness of models due to the scarcity of fraudulent cases. This study addresses the critical problem of data imbalance through an in-depth exploration of techniques, including cross-entropy loss minimization, weighted optimization, and synthetic minority oversampling technique (SMOTE)-based resampling, coupled with deep neural networks (DNNs). The urgent need to combat class imbalances in credit card fraud datasets is underscored, emphasizing the creation of reliable detection models. The research method delves into the application of DNNs, strategically optimizing and resampling the dataset to enhance model performance. The study employs a dataset from October 2018, containing 284,807 transactions, of which a mere 492 are classified as fraudulent. Various resampling techniques, such as undersampling and SMOTE oversampling, are evaluated alongside weighted optimization. The results showcase the effectiveness of SMOTE oversampling, achieving an accuracy of 99.83% without any false negatives. The study concludes by advocating for flexible strategies, integrating cutting-edge machine learning methods, and developing adaptive defenses to safeguard against emerging financial risks in credit card fraud detection.
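A minimal sketch of the rebalancing step described above, assuming a feature matrix X and binary labels y (1 = fraud); the toy data, MLP layer sizes, and split are illustrative assumptions, not the paper's exact configuration. SMOTE oversampling is shown; applying class weights during optimization would be the weighted alternative.

```python
# Sketch: SMOTE rebalancing before training a small neural classifier.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for the transaction data: ~2% positive (fraud) class.
X = np.random.randn(2000, 30)
y = (np.random.rand(2000) < 0.02).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# SMOTE synthesizes minority-class samples until both classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200, random_state=0)
clf.fit(X_bal, y_bal)
print("held-out accuracy:", clf.score(X_te, y_te))
```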
Diabetes mellitus (DM) poses a significant health challenge globally, necessitating accurate and timely diagnosis for effective management. Conventional diagnostic methods often struggle to address the multifaceted nature of diabetes and the requisite lifestyle adjustments. In this study, we propose a data-driven approach utilizing machine learning techniques to enhance diabetes diagnosis. By leveraging extensive patient attributes and medical records, machine learning algorithms can uncover intricate patterns and correlations. Our methodology, validated on the PIMA India dataset, demonstrates promising results. The random forest model achieved an accuracy of 87%, followed by gradient boost at 90%. Notably, XGBoost and CatBoost models attained the peak accuracy of 90.9%. These findings underscore the potential of machine learning in transforming diabetes diagnosis. Beyond improving diagnostic accuracy, our approach aims to guide individuals towards healthier lifestyles. Intelligent systems driven by machine learning hold promise for revolutionizing diabetes management, ultimately leading to better patient outcomes and more effective healthcare delivery.
Early-stage detection of chronic kidney disease (CKD) is crucial in research to enable timely intervention, enhance understanding of disease progression, reduce healthcare costs, and support public health initiatives. Traditional approaches to early-stage chronic kidney disease detection often suffer from slow convergence and do not integrate advanced technologies, impacting their effectiveness. Additionally, security and privacy concerns related to patient data are often inadequately addressed. To overcome these issues, this research incorporates novel optimized artificial intelligence-based approaches. The main aim is to enhance the detection process through the enhanced hybrid mud ring network (EHMRN), a novel detection technique combining a light gradient boosting machine and MobileNet, involving extensive data collection, including a large dataset of 100,000 instances. The introduced network is optimized through mud ring optimization to attain enhanced performance. Incorporating Spark ensures secure cloud-based storage, enhancing privacy and compliance with healthcare data regulations. This approach represents a significant advancement, enabling more effective and prompt primary-stage detection. The results show that the introduced approach outperforms traditional approaches in terms of accuracy (99.96%), F1-score (99.91%), precision (100%), specificity (99.98%), recall (100%), and execution time (0.09 s).
Sign language is the only means of communication for deaf and hearing-disabled people in their communities. It uses body language and gestures, such as hand shapes and facial expressions, to convey a message. It is important to note that sign language is specific to the region; that is, Arabic sign language (ArSL) is different from English sign language. Therefore, this research proposes a way to improve the translation of ArSL using a new artificial intelligence (AI) architecture. Specifically, a convolutional neural network (CNN) based on fine-tuning of the SSD-ResNet50 V1 FPN is applied to build a real-time ArSL recognition and translation system with fast and accurate results. The proposed AI architecture can provide translation of sign language in real-time to enhance communication in the deaf community. We achieved an average F-score of 86% and an average accuracy of 94%.
The purpose of this study is to determine the key socio-technical factors influencing big data analytics adoption in healthcare services. A systematic literature review was conducted using peer-reviewed scholarly publications spanning from 2013 to 2023 to illuminate the influencing factors. Twelve papers focused on the factors influencing big data analytics (BDA) adoption in healthcare services were included for review. The factors were divided into four major groups, namely i) person, ii) technology, iii) organization, and iv) environment. The person dimension is defined by analytical skills, whereas technology is characterized by system quality and information quality. Organizational support, organizational resources, training, data governance, and evidence-based decision-making are all associated with the organization. Finally, government regulations are allocated to the environment. This review presents evidence of the socio-technical factors that influence big data analytics adoption in healthcare services. The findings from this review recommend that future big data analytics adoption in healthcare services carefully evaluate the factors identified in this study.
The internet of things (IoT) represents a rapidly expanding sector within computing, facilitating the interconnection of myriad smart devices autonomously. However, the complex interplay of IoT systems and their interdisciplinary nature has presented novel security concerns (e.g., privacy risks, device vulnerabilities, and botnets). In response, there has been a growing reliance on machine learning and deep learning methodologies to transition from conventional connectivity-centric IoT security paradigms to intelligence-driven security frameworks. This paper undertakes a comprehensive comparative analysis of recent advancements in the creation of IoT botnets. It introduces a novel taxonomy of attacks structured around the attack life-cycle, aiming to enhance the understanding and mitigation of IoT botnet threats. Furthermore, the paper surveys contemporary techniques employed for early-stage detection of IoT botnets, with a primary emphasis on machine learning and deep learning approaches. This elucidates the current landscape of the issue, existing mitigation strategies, and potential avenues for future research.
The immense growth of mobile networks leads to versatile applications and new demands. The improved coordination, transferability, flexibility, and performance of innovative network services are applied in diversified fields. More unique networking concepts are incorporated into state-of-the-art mobile technologies to expand these dynamic features further. This paper presents a novel system architecture of slicing and pairing networks with intra-layer and inter-layer functionalities in 5th generation (5G) mobile networks. The radio access network layer slices and the core network layer slices are paired up using the network slicing pairing functionalities. The physical network elements of such network slices are logically assigned to software entities, an approach known as network softwarization. Such a novel system architecture, called network sliced softwarization of 5G mobile networks (NSS-5G), has shown better performance in terms of end-to-end delay, total throughput, and resource utilization when compared to traditional mobile networks. Thus, effective resource management is achieved using NSS-5G. This study will pave the way for future softwarization of heterogeneous mobile applications.
Task scheduling in the edge computing environment poses significant challenges due to its inherent NP-hard nature. Several researchers have concentrated on minimizing the simple makespan while disregarding the mean time to complete all tasks, resulting in uneven distributions of completion times. To address this issue, this study proposes a novel mean makespan task scheduling strategy (MMTSS) to minimize both the simple and the mean makespan. MMTSS optimizes the utilization of virtual machine capacity and uses mean makespan optimization to minimize the processing time of tasks. In addition, it reduces imbalance by evenly distributing tasks among virtual machines, which makes it easier to schedule subsequent batches. Using genetic algorithm optimization, MMTSS effectively lowers processing time and mean makespan, offering a viable approach for effective task scheduling in the edge computing environment. The simulation results, obtained using cloudlets ranging from 500 to 2000, explicitly demonstrate the improved performance of our approach in terms of both simple and mean makespan metrics.
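A minimal sketch of the dual objective MMTSS targets: the simple makespan (when the last virtual machine finishes) combined with the mean completion time across tasks. The schedule encoding (a task-to-VM index list) and the equal 0.5 weights are assumptions; the paper's exact fitness function may differ.

```python
# Sketch: GA fitness combining simple makespan and mean completion time.
def fitness(schedule, task_len, vm_speed):
    finish = [0.0] * len(vm_speed)      # running finish time per VM (FCFS per VM)
    completion = []
    for task, vm in enumerate(schedule):
        finish[vm] += task_len[task] / vm_speed[vm]
        completion.append(finish[vm])   # this task's completion time
    makespan = max(finish)                              # simple makespan
    mean_makespan = sum(completion) / len(completion)   # mean completion time
    return 0.5 * makespan + 0.5 * mean_makespan         # lower is better

# Four tasks mapped onto two VMs of different speeds.
print(fitness([0, 1, 0, 1], task_len=[4, 2, 6, 3], vm_speed=[1.0, 2.0]))
```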
The internet of things (IoT) is an emerging technology that has gained great relevance in the current socioeconomic context, especially in the business environment, due to its ability to generate competitive advantages. Its adoption presents challenges, such as understanding the value proposition, staff training, and ensuring connectivity and compatibility. In addition, it is crucial to establish the technological maturity of the IoT in enterprises to determine their current state and take steps to address these challenges. In this study, a bibliometric analysis of 431 articles from different scientific databases was performed using the Bibliometrix and VOSviewer tools to determine the current state of the domain. The results indicate that the field is booming, with an annual growth rate of 22.58%. Its conceptual structure is composed of the IoT implemented in different contexts, in conjunction with the influence of sister technologies such as big data and blockchain, suggesting limited specificity in establishing the maturity of enterprise IoT. Countries such as China and Brazil were found to be at the forefront of the area. A promising avenue is to establish standardized ways to measure technological maturity and to provide guidelines for improving internet of things adoption.
This article introduces an advanced solution for anonymizing large-scale sensitive data, addressing the limitations of traditional approaches when applied to vast datasets. By leveraging the Spark distributed computing framework, we propose a method that parallelizes the data anonymization process, enhancing efficiency and scalability. Utilizing Spark's resilient distributed datasets (RDD), the approach integrates two primary operations, Map_RDD and ReduceByKey_RDD, to execute the anonymization tasks. Our comprehensive experimental evaluation demonstrates our solution's effectiveness and improved performance in preserving data privacy while balancing data utility and confidentiality. A significant contribution of our study is the development of a wide array of solutions for data owners, particularly notable for a 500 MB dataset at an anonymity level of K=100, where our methodology produces 832 unique solutions. This study also opens avenues for future research in applying different privacy models within the Spark ecosystem, such as l-diversity and t-closeness.
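The Map_RDD / ReduceByKey_RDD pattern described above can be sketched as follows, using a toy generalization (age rounded down to a 10-year band) as the anonymization step; the record layout and the generalization rule are illustrative assumptions, not the paper's operators.

```python
# Sketch: parallel anonymization with Spark RDD map + reduceByKey.
from pyspark import SparkContext

sc = SparkContext("local[2]", "anonymize")
records = sc.parallelize([
    ("alice", 34, "flu"), ("bob", 37, "flu"), ("carol", 52, "cold"),
])

# Map: replace the quasi-identifier (age) with a generalized band, drop identity.
pairs = records.map(lambda r: ((r[1] // 10 * 10, r[2]), 1))

# ReduceByKey: count records per equivalence class to check k-anonymity.
class_sizes = pairs.reduceByKey(lambda a, b: a + b)
print(class_sizes.collect())  # e.g. [((30, 'flu'), 2), ((50, 'cold'), 1)]
sc.stop()
```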
The classification of student performance involves categorizing students' performance using input data such as demographic information and examination results. However, our study introduces a novel approach by emphasizing students' online learning activities as a rich data source. To avoid misinterpretation during classification, we present a study comparing several feature selection (FS) methods combined with an artificial neural network (ANN) for classifying students' performance based on their online learning activities. At first, we focused on tackling the issue of missing values by implementing data cleaning using a variance threshold. Feature selection techniques were then implemented, encompassing both filter-based (information gain, chi-square, Pearson correlation) and wrapper-based, sequential selection (forward and backward) techniques. In the classification stage, a multi-layer perceptron (MLP) was used with the default hyperparameters, and 5-fold cross-validation along with the synthetic minority oversampling technique (SMOTE) was applied to each method. We evaluated each feature selection method's performance using key metrics: accuracy, precision, recall, and F1-score. The outcomes highlighted information gain and sequential selection (forward and backward) as the top-performing methods, all achieving 100% accuracy. This research underscores the potential of leveraging online learning activities for robust student performance classification within the specified constraints.
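One filter-based configuration from the comparison above can be sketched as follows: information gain (approximated here by mutual information), an MLP, and SMOTE applied inside each cross-validation fold. The synthetic dataset and k=10 selected features are assumptions for illustration.

```python
# Sketch: variance-threshold cleaning -> information-gain FS -> SMOTE -> MLP.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import VarianceThreshold, SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=30, weights=[0.8], random_state=0)
pipe = Pipeline([
    ("clean", VarianceThreshold()),                      # drop near-constant columns
    ("select", SelectKBest(mutual_info_classif, k=10)),  # information-gain proxy
    ("smote", SMOTE(random_state=0)),                    # rebalance inside each fold
    ("mlp", MLPClassifier(max_iter=300, random_state=0)),
])
print("mean 5-fold F1:", cross_val_score(pipe, X, y, cv=5, scoring="f1").mean())
```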
An advanced network system (ANS) is characterized by extensive communication features that can support a sophisticated collaborative network structure. This is essential to hosting various forms of upcoming modernized and innovative applications. Security is one of the rising concerns associated with ANS deployment. It is also noted that machine learning is one of the preferred cost-effective ways to optimize security strength and address various ongoing security problems in ANS; however, its overall effectiveness at scale remains unknown. Hence, this paper contributes a systematic review of existing variants of machine learning approaches for threat identification in ANS. As ANS is a generalized form, this discussion considers the impact of existing machine learning approaches on its practical use cases. The paper also contributes a critical gap analysis and highlights the study's potential learning outcomes.
In order to enhance marketing efforts and improve the performance of marketing campaigns, the effectiveness of language generation models needs to be evaluated. This study examines the performance of large language models (LLMs), namely GPT-3.5, PaLM 2, and bidirectional encoder representations from transformers (BERT), in generating email subjects for advertising campaigns. By comparing their results, the authors evaluate the efficacy of these models in enhancing marketing efforts. The objective is to explore how LLMs contribute to creating compelling email subject lines and improving opening rates and campaign performance, which gives us an insight into the impact of these models in digital marketing. In this paper, the authors first go over the different types of language models and the differences between them, before giving an overview of the most popular ones that will be used in the study, such as GPT-3.5, PaLM 2, and BERT. This study assesses the relevance, engagement, and uniqueness of GPT-3.5, PaLM 2, and BERT by training and fine-tuning them on marketing texts. The findings provide insights into the major positive impact of artificial intelligence (AI) on digital marketing, enabling informed decision-making for AI-driven email marketing strategies.
Plant diseases can severely impact crop yields, posing a major risk to worldwide food stability. Prompt and precise identification of these diseases is crucial for early intervention and efficient crop administration. This paper introduces an innovative method for detecting plant leaf diseases using residual networks (ResNets) and the PlantVillage dataset. To develop the lightweight residual (LWR) architecture, five convolutional layers are interleaved with five max-pooling layers, making up a ten-layer architecture. The number of filters in the convolutional layers is gradually increased from 32 to 64 and up to 512 with a 3×3 kernel. A fully connected layer is the last layer of the network, providing the classification of leaf diseases. The LWR architecture is trained and evaluated using the PlantVillage dataset, a broad collection of annotated images, which serves as the basis for the system. The findings of the experiments provide evidence that the suggested system achieves higher accuracy, sensitivity, and specificity measures. The use of residual connections in the LWR architecture improves the capability of the model to acquire complicated representations, which in turn enables a more precise differentiation between healthy and unhealthy plant leaves.
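The ten-layer stack described above translates almost directly into code; a sketch follows, with the residual/skip wiring omitted and the input size (224×224 RGB) and 38 PlantVillage classes assumed rather than taken from the paper.

```python
# Sketch: five 3x3 conv layers (32 -> 512 filters) interleaved with five
# max-pool layers, ending in a fully connected classifier head.
import torch
import torch.nn as nn

class LWR(nn.Module):
    def __init__(self, num_classes=38):
        super().__init__()
        chans = [3, 32, 64, 128, 256, 512]
        layers = []
        for cin, cout in zip(chans, chans[1:]):
            layers += [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(512 * 7 * 7, num_classes)  # 224 / 2**5 = 7

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

print(LWR()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 38])
```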
Hand gesture recognition emerges as one of the foremost sectors that has gone through several developments within pattern recognition. Numerous studies and research endeavors have explored methodologies grounded in computer vision within this domain. Despite extensive research endeavors, there is still a need for a more thorough evaluation of the efficiency of various methods in different environments, along with the challenges encountered during the application of these methods. The focal point of this paper is the comparison of different research in the domain of vision-based hand gesture recognition. The objective is to find the most prominent methods by reviewing their efficiency. Concurrently, the paper delves into presenting potential solutions for challenges faced in different research. A comparative analysis is presented, centered on traditional methods and neural approaches such as random forest, long short-term memory (LSTM), heatmap-based methods, and you only look once (YOLO), considering their efficacy. Convolutional neural network-based algorithms performed best at recognizing gestures and provided effective solutions to the challenges faced by researchers. In essence, the findings of this review paper aim to contribute to future implementations and the discovery of more efficient approaches in the gesture recognition sector.
Over the past few years, object detection has experienced remarkable advancements, primarily attributable to significant progress in deep learning architectures. Nonetheless, the task of identifying aircraft targets within remote sensing images remains a challenging and actively explored area. Presently, there are two main approaches employed for this task: one utilizing convolutional neural network (CNN) techniques and the other relying on conventional methods. In this work, a CNN-based architecture is proposed to recognize aircraft types using remote sensing images. The experiments performed on the multi-type aircraft remote sensing images (MTARSI) dataset show that the proposed architecture achieves 97.07%, 94.81%, and 94.44% accuracy rates on the training, validation, and testing sets, respectively. The results confirm that the architecture outperforms state-of-the-art models.
This paper presents a clothing recommendation system for women based on their body type, aiming to facilitate the purchasing process on the online sales channel of the company Lady's Confecciones located in the city of Santa Marta, Colombia. For this process, a user interface was designed to function in two ways: using a prediction model that takes as inputs a photograph of the user and their height, and a manual mode that receives the measurements of bust, hip and waist. The prediction model implemented the OpenCV library and the skinned multi-person linear (SMPL) model to process images and predict body shape and pose. Five body types were considered: triangle, apple, rectangle, hourglass and inverted triangle, differentiated by bust, waist and hip measurements, according to the conditions provided by the company. The system was able to predict the body measurements of the female participants with a maximum Pearson correlation coefficient of 0.97. For predicting body type, the best results were obtained for the rectangle body shape, with an accuracy of 92.31%.
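A hedged sketch of the rule-based half of the system, assigning one of the five body types from bust, waist, and hip measurements; the thresholds below are invented for illustration and are not the conditions provided by Lady's Confecciones.

```python
# Sketch: rule-based body-type assignment from measurements in cm.
# All thresholds are illustrative assumptions.
def body_type(bust, waist, hip):
    if hip - bust >= 5:                 # hips noticeably wider than bust
        return "triangle"
    if bust - hip >= 5:                 # bust noticeably wider than hips
        return "inverted triangle"
    if min(bust, hip) - waist >= 20:    # strongly defined waist
        return "hourglass"
    if waist >= min(bust, hip):         # waist as wide as bust/hips
        return "apple"
    return "rectangle"

print(body_type(bust=90, waist=65, hip=92))  # "hourglass" under these thresholds
```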
Load balancing (LB) is very critical in cloud computing because it keeps some nodes from being overloaded while others are idle or underutilized. Maintaining quality of service (QoS) characteristics like response time, throughput, cost, makespan, resource utilization, and runtime is difficult in cloud computing without effective load balancing. A robust resource allocation strategy contributes to the end user receiving high-quality cloud computing services. An effective LB strategy should improve and deliver the required user satisfaction by efficiently using the resources of virtual machines (VMs). The Q-learning method and the honey bee foraging load balancing algorithm were combined in this study. This hybrid combination of a load balancing algorithm and a machine learning method reduced the runtime of load balancing activities and the makespan, and increased task throughput in a cloud computing environment, thereby enhancing routing activities. It achieved this by continuously tracking the usage histories of the VMs and altering the usage matrix to send jobs to the VMs with the best usage histories.
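The Q-learning half of the hybrid balancer can be sketched as follows, with a coarse load level as the state, a VM choice as the action, and a reward favoring VMs with good usage histories; the state/reward design and the hyperparameters are assumptions.

```python
# Sketch: epsilon-greedy Q-learning for VM selection.
import random

N_VMS, N_STATES = 3, 3                  # 3 VMs, 3 coarse load levels
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in range(N_VMS)}

def choose_vm(state):
    if random.random() < EPS:                               # explore
        return random.randrange(N_VMS)
    return max(range(N_VMS), key=lambda a: Q[(state, a)])   # exploit best VM

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in range(N_VMS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One step: dispatch a job, observe a reward (e.g., inverse response time).
update(0, choose_vm(0), reward=1.0, next_state=1)
```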
This paper presents a multi-agent license plate recognition system, specifically designed to address the diverse and challenging nature of license plates. Utilizing a multi-agent architecture with agents operating in individual Docker containers and orchestrated by Kubernetes, the system demonstrates remarkable adaptability and scalability. It leverages advanced neural networks, trained on a comprehensive dataset, to accurately identify various license plate types under dynamic conditions. The system's efficacy is showcased through its three-layered approach, encompassing data collection, processing, and result compilation, significantly outperforming traditional license plate recognition (LPR) systems. This innovation not only marks a technological leap in license plate recognition but also offers strategic solutions for enhancing traffic management and smart city infrastructure globally.
Chronic heart failure (CHF) is a significant public health concern due to its increasing prevalence, high number of hospital admissions, and associated mortality. Its prevalence is progressively increasing due to the aging of the population and the decrease in mortality from acute myocardial infarction, among other medical advancements. Consequently, the incidence of CHF predominantly affects older age groups, doubling its prevalence every decade and becoming one of the main causes of mortality in patients older than 65 years. The main objective of this study is to apply machine learning-based techniques to determine the best models to classify patients with chronic heart failure through their respiratory pattern. These patterns have been characterized from time series such as inspiratory and expiratory times, breathing duration, and tidal volume obtained from the respiratory flow signal. Based on the behavior of the respiratory pattern, CHF patients were classified into patients with non-periodic breathing, with periodic breathing, and with Cheyne-Stokes respiration (CSR). Time-frequency and statistical techniques have been implemented to analyze these features, and various classification methods have then been applied to define the optimal model with the best accuracy rates. These models could help to better understand the evolution of this disease and support early diagnosis.
Coronary heart disease (CHD) is a leading global cause of death. Early detection is the right step to reduce mortality rates and treatment costs. Early detection can be developed using machine learning by utilizing patient medical record datasets. Unfortunately, these datasets often have excessive features, which can reduce machine learning performance. For this reason, it is necessary to reduce the number of redundant features and irrelevant data to improve machine learning performance. Therefore, this research proposes a tiered feature selection model with a genetic algorithm (GA) and particle swarm optimization (PSO) to improve the performance of the diagnosis model. The feature selection model is evaluated using parameters derived from the confusion matrix and the CatBoost machine learning algorithm. Model testing uses the z-Alizadeh Sani, Cleveland, Statlog, and Hungarian datasets. The best results for this model were obtained on the z-Alizadeh Sani dataset, with 6 features selected from 54, and the resulting performance was 99.32% accuracy, 98.57% specificity, 100.00% sensitivity, 99.28% area under the curve (AUC), and 99.37% F1-score. The proposed feature selection model is thus able to deliver diagnostic performance in the excellent category.
One of the most critical aspects of a software piece is its vulnerabilities. Regardless of the years of experience, the type of project, or the size of the team, it is impossible to avoid introducing vulnerabilities while developing or maintaining software. This aspect becomes crucial when the software is deployed in production or released to the final users. At that point, finding vulnerabilities becomes a race between the developers and malicious intruders: whoever finds one first can either exploit it or fix it. Acknowledging this situation, using the tools and standards available in the field, such as common vulnerability exposures and common vulnerability scoring systems, and building on modern research, in this study we propose an approach different from the common practice of manual classification: a 2-layer convolutional neural network (CNN) to automate the classification of vulnerabilities, speeding up this process and enabling developers to respond faster to vulnerabilities, producing safer software. The experimental results obtained in this study suggest that pre-trained word embeddings contributed to an increase in accuracy of approximately 2%, with the overall accuracy reaching 0.816.
Amidst the coronavirus disease 2019 (COVID-19) pandemic, researchers are exploring innovative approaches to enhance diagnostic accuracy. One avenue is utilizing deep learning models to analyze lung X-ray images for COVID-19 diagnosis, complementing existing tests like reverse transcription polymerase chain reaction (RT-PCR). However, trusting these models, often viewed as black boxes, presents a challenge. To address this, six explainable artificial intelligence (XAI) techniques: local interpretable model agnostic explanations (LIME), Shapley additive explanations (SHAP), integrated gradients, SmoothGrad, gradient-weighted class activation mapping (Grad-CAM), and Layer-CAM are applied to interpret four transfer learning models. These models: VGG16, ResNet50, InceptionV3, and DenseNet121 are analyzed to understand their workings and the rationale behind their predictions. Validating the results with medical experts poses difficulties due to time and resource constraints, alongside the scarcity of annotated X-ray datasets. To address this, a voting mechanism employing different XAI methods across various models is proposed. This approach highlights regions of lung infection, potentially reducing individual model biases stemming from their structures. If successful, this research could pave the way for an automated system for annotating infection regions, bolstering confidence in predictions and aiding in the development of more effective diagnostic tools for COVID-19.
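The proposed voting mechanism can be sketched as follows: normalize each method's saliency map, let each cast a per-pixel vote, and keep the pixels a majority highlight. The map shapes and both thresholds are assumptions, not the paper's exact procedure.

```python
# Sketch: majority voting over saliency maps from multiple XAI methods/models.
import numpy as np

def vote(saliency_maps, threshold=0.5):
    votes = []
    for m in saliency_maps:
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)  # normalize to [0, 1]
        votes.append(m > threshold)                     # each method casts a vote
    consensus = np.mean(votes, axis=0)                  # fraction of methods agreeing
    return consensus >= 0.5                             # majority-voted infection mask

maps = [np.random.rand(224, 224) for _ in range(6)]    # six XAI methods (toy data)
print(vote(maps).sum(), "pixels flagged by a majority of methods")
```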
Extracting buildings from remote sensing imagery (RSI) is an essential task in a wide range of applications, such as urban planning and monitoring. Deep learning has emerged as a powerful tool for this purpose, and in this research, we propose an advanced building extraction method based on SE-ResNet18 and SE-ResNet34 architectures. These models were selected through a rigorous comparative analysis of various deep learning models, including variations of residual networks (ResNet), squeeze-and-excitation residual networks (SE-ResNet), and visual geometry group (VGG), for their high performance in all metrics and their computational efficiency. Our proposed methodology outperformed all other models under consideration by a significant margin, demonstrating its robustness and efficiency. It achieved superior results with less computational effort and time, a testament to its potential as a powerful tool for semantic segmentation tasks in remote sensing applications. An extensive comparative evaluation involving a wide range of state-of-the-art works further validated our method's effectiveness. Our method achieved an unparalleled intersection over union (IoU) score of 88.51%, indicative of its exceptional accuracy in identifying and segmenting buildings within the Wuhan University (WHU) building dataset. The overall performance of our method, which offers an excellent balance between high performance and computational efficiency, makes it a compelling choice for researchers and practitioners in the field.
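The squeeze-and-excitation block that distinguishes SE-ResNet18/34 from plain ResNet is compact enough to sketch; the reduction ratio r=16 is the usual default, assumed here rather than taken from the paper.

```python
# Sketch: squeeze-and-excitation channel recalibration block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))            # squeeze: global average pool -> B x C
        w = self.fc(w)[:, :, None, None]  # excitation: per-channel weights
        return x * w                      # recalibrate the feature maps

print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```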
Malaria is a significant global health issue, responsible for the highest rates of morbidity and mortality globally. This paper introduces a very effective and precise convolutional neural network (CNN) method that employs advanced deep learning techniques to automate the detection of malaria in images of red blood cells (RBC). Furthermore, we present an emerging and efficient deep learning method for differentiating between cells infected with malaria and those that are not infected. To thoroughly evaluate the efficiency of our approach, we perform a meticulous assessment that involves comparing different deep learning models, such as ResNet-50, MobileNet-v2, and Inception-v3, within the domain of malaria detection. Additionally, we conduct a thorough comparison of our proposed approach with current automated methods for malaria identification. An examination of the most current techniques reveals differences in performance metrics, such as accuracy, specificity, sensitivity, and F1 score, for diagnosing malaria. Moreover, compared to existing models for malaria detection, our method is the most successful, achieving a score of 1.00 on all statistical metrics, confirming its promise as a highly efficient tool for automating malaria detection.
The rapid evolution of network communication technologies has led to the emergence of new forms of malware and cybercrime, posing significant threats to user safety, network infrastructure integrity, and data privacy. Despite efforts to develop advanced algorithms for detecting malicious activity, constructing models that are both accurate and reliable remains a challenge, especially in handling vast and dynamically shifting data patterns. The prevalent bag-of-words (BOW) method, while widely used, falls short in capturing the crucial spatial and sequence information vital for detecting malware patterns. To address this challenge, the work presented in this paper proposes hybrid convolutional neural network-long short-term memory (CNN-LSTM) models, leveraging the CNN's spatial information extraction and the LSTM's temporal modeling capabilities. Focused on predicting the infiltration of malicious software into personal computers, the proposed hybrid CNN-LSTM model considers factors such as location, firmware version, operating system, and anti-virus software. The proposed models undergo training and evaluation using Microsoft's malware dataset, demonstrating superior performance compared to traditional CNN and LSTM models. The CNN-LSTM model achieves an impressive accuracy of 95% on the Microsoft malware dataset, highlighting its effectiveness in malware detection.
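A hedged sketch of such a hybrid, applied to a sequence of numerically encoded per-machine feature vectors (e.g., firmware/OS/anti-virus indicators); all dimensions and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Sketch: 1D-CNN for local patterns feeding an LSTM for temporal dependencies.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(  # spatial/local pattern extraction
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)  # temporal modeling
        self.out = nn.Linear(hidden, 1)                    # P(infection)

    def forward(self, x):  # x: batch x time x n_features
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.lstm(z)
        return torch.sigmoid(self.out(h[-1]))

print(CNNLSTM()(torch.randn(4, 10, 32)).shape)  # torch.Size([4, 1])
```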
Automatic personality recognition is a task that attempts to automatically infer personality traits from a variety of data sources, including text. Our words, whether spoken or written, reveal a lot about who we are. As people speak different languages, each with its own set of characteristics and level of complexity, identifying their personalities automatically might be language-dependent. This task requires an annotated text corpus with personality traits. However, the lack of corpora for languages other than English makes the task extremely challenging. We concentrated our efforts in this paper on the Arabic language in particular because it is understudied and lacks a corpus, despite being one of the most widely spoken languages in the world. Our primary goal was to construct our "MSAPersonality" dataset, which consists of 267 texts in modern standard Arabic that have been annotated with the Big Five personality traits. To evaluate the dataset and its potential for classification and regression, we used text preprocessing techniques, feature extraction, and machine learning algorithms. We obtained promising experimental results. Therefore, further research into predicting personality from Arabic text can be conducted.
The article discusses the semantic aspects of Kazakh sign language and its characteristics. Semantics, a field within linguistics, focuses on examining the meanings conveyed by expressions and combinations of signs. The author delves into the inquiry of the degree of similarity between verbal and sign languages, highlighting their fundamental distinctions. The primary objective of the research is to scrutinize the characteristics of parts of speech in the Kazakh language when expressed gesturally, along with the principles governing the translation of verbs and adverbial tenses. The article explains in detail the formulas for translating text into sign language, based on the subject-object-predicate structure. Examples are given that illustrate the subject-object relationship and determine who acts as the speaker, the "object" or the "subject" of the utterance. It should be noted that, for successful translation, it is first necessary to understand the meaning of the sentence. The article concludes by emphasizing the importance of understanding both structural elements and contextual nuances in the fascinating world of the semantics of Kazakh sign language. It inspires further research aimed at uncovering the complexities and exceptions that contribute to a deep understanding of linguistic nuances in this unique form of communication.
Heart disease remains a leading cause of mortality worldwide, prompting healthcare researchers to leverage analytical tools for comprehensive data analysis. This study focuses on exploring crucial parameters and employing deep learning (DL) techniques to enhance understanding and prediction of cardiovascular disease (CVD) risk factors. Utilizing SPSS and Weka tools, a cross-sectional and correlational design was employed to analyze extensive medical datasets. Binomial regression analysis revealed significant associations between age (p = 0.004) and body mass index (p = 0.002) and CVD development, highlighting their importance as risk factors. Leveraging Weka's DL algorithms, a predictive model was constructed to classify CVD causes. In particular, a convolutional neural network (CNN) showcased remarkable accuracy, reaching 98.64%. The findings underscore the elevated risk of CVD among university students and employees in Saudi Arabia, emphasizing the need for heightened awareness and preventive measures, including dietary improvements and increased physical activity. This study underscores the importance of further research to enhance CVD risk perception among students and individuals in similar settings.
The integration of cutting-edge technologies into swiftlet farming has greatly enhanced efficiency, productivity, and sustainability. The internet of things (IoT) provides farmers with up-to-date environmental data, enabling them to create and sustain ideal conditions for swiftlets. Artificial intelligence (AI) enhances this process by analyzing vast databases and providing farmers with well-informed choices to optimize yield. Biotechnology, by combining genetic selection and breeding programs, effectively connects with the IoT, enabling constant monitoring and control of the health and genetic traits of swiftlets. The integration of renewable energy technology seeks to diminish dependence on conventional energy sources, promoting sustainability. In this paper, a systematic review of the literature examines the utilization of digital technology in the swiftlet farmhouse. The findings were classified into three main themes: smart monitoring and control systems, advanced bird detection techniques, and sustainable practices and innovative approaches, specifically in the manufacture of edible bird nests. This systematic literature review emphasizes the multidisciplinary nature of swiftlet farming's technological evolution and outlines the challenges and recommendations that farmers, technology developers, and the industry face in their pursuit of sustainable growth.
Efficient offloading and scientific task scheduling are crucial for managing computational tasks in research environments. This involves determining the optimal location for executing a workflow task and allocating the task to computing resources to optimize performance. The challenge is to minimize completion time, energy consumption, and cost. This study proposes three methods: latency-centric offloading (LCO) for delay-sensitive applications; energy-based offloading (EBO) for energy saving; and efficient offloading (EO) for balanced task distribution across tiers. Scheduling in this paper uses a genetic algorithm (GA) with a weighted-sum objective function considering makespan, cost, and energy for internet of things-fog-cloud (IoT-fog-cloud) environments. Comparative studies involving Montage, CyberShake, and Epigenomics workflows indicate that LCO excels in terms of makespan and cost but ranks lowest in energy. EBO excels in energy efficiency, aligning closely with the base method. EO competes effectively with the base method in terms of makespan and cost but consumes more energy. This research enables the selection of the most suitable method based on the type of application and its prioritization of makespan, energy, or cost.
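The weighted-sum scalarization the GA minimizes can be sketched as follows; the weights and normalization constants are assumptions, as the paper's exact values are not given here.

```python
# Sketch: weighted-sum objective over makespan, cost, and energy.
def objective(makespan, cost, energy,
              w=(0.4, 0.3, 0.3),          # assumed priority weights (sum to 1)
              ref=(100.0, 50.0, 10.0)):   # assumed normalization references
    terms = (makespan / ref[0], cost / ref[1], energy / ref[2])
    return sum(wi * ti for wi, ti in zip(w, terms))  # lower is better

# A candidate schedule's aggregate fitness under these assumptions:
print(objective(makespan=80.0, cost=40.0, energy=9.0))
```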
This review of Association of Southeast Asian Nations (ASEAN) augmented reality (AR) and virtual reality (VR) studies uses bibliometric analysis and VOSviewer mapping. This study examines an extensive set of Scopus articles from reliable sources to determine who contributes to ASEAN AR and VR research, the research themes, how people collaborate, and how people cite each other. A bibliographic analysis shows that the number of ASEAN AR and VR research articles has grown significantly since 2010. It also identifies important ASEAN research institutions, authors, and countries. The research themes are shown visually through VOSviewer mapping, showing how AR and VR can be used in healthcare, travel, gaming, and business. Co-authorship and reference networks shed light on how people collaborate on research projects and how ideas move within and outside of ASEAN. This organized review of ASEAN AR and VR research helps researchers, policymakers, and business stakeholders understand the current situation, find research gaps, and work together. The results can inform research priorities, resource use, and policy changes to encourage the growth and use of AR and VR technologies in ASEAN, leading to more innovation, economic development, and positive social effects.
Ensuring fairness in the utilization of government-funded public facilities, such as co-working spaces, sports fields, and meeting rooms, is imperative to accommodate all citizens. However, meeting these requirements poses a significant challenge due to the high costs associated with maintaining digital infrastructure, employee wages, and cybersecurity expenses. Fortunately, blockchain smart contracts present an economical and secure solution for managing digital infrastructure. They offer a pay-per-transaction schema, immutable transaction records, and role-based data updates. Despite these advantages, public blockchains raise concerns about data privacy since records are publicly readable. To address this issue, this study proposes a privacy-preserving mechanism for public facilities' reservation systems. The approach involves encrypting the reservation table with fully homomorphic encryption (FHE). By employing FHE with binary masking and polynomial evaluation, the reservation table can be updated without decrypting the data. Consequently, citizens can discreetly book facilities without revealing their identities, while eliminating the risk of overlapping schedules. The proposed system allows anyone to verify reservations without disclosing the requested data or the table contents. Moreover, the system operates autonomously without the need for human administration, ensuring enhanced user privacy.
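Conceptually, the binary-masking update works as below, shown on plaintext bits as a stand-in: the same arithmetic (a low-degree polynomial with no branching on secret values) would be evaluated over FHE ciphertexts in the actual system, so the server never learns which slot was requested.

```python
# Conceptual sketch: branch-free reservation-table update via binary masking.
# table and request are 0/1 vectors over time slots (1 = taken / requested).
def update_table(table, request):
    # conflict = 1 iff any requested slot is already taken (degree-2 polynomial)
    conflict = 0
    for t, r in zip(table, request):
        conflict |= t & r
    # accept the request only when conflict == 0, using arithmetic, not branching
    return [t | (r & (1 - conflict)) for t, r in zip(table, request)]

print(update_table([1, 0, 0, 0], [0, 1, 0, 0]))  # booked   -> [1, 1, 0, 0]
print(update_table([1, 0, 0, 0], [1, 0, 0, 0]))  # conflict -> table unchanged
```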
This article examines the proper organization of the supply chain to meet consumer demand, which is crucial for modern commercial enterprises involved in the sale of various products. Studies indicate that a company's success is linked to the satisfaction of its customers. To optimize the supply chain, this study considers the use of artificial neural network models. The model seeks a balance between demand and supply, helping determine the quantity of goods necessary to satisfy demand and prevent overproduction. By using this model, a company can fully meet the needs of its customers. Additionally, the company saves resources and labor costs and can reallocate them to other tasks. The model demonstrates the optimization of production and supply business processes, as well as an increase in efficiency.
A smart sustainable library is a new form of library that blends sustainability and smart libraries with an emphasis on ethics. This study focuses on the need for thorough tools to assess the evolving concept of a smart sustainable library, especially within Malaysian higher education. This study emphasizes the need for a comprehensive tool that combines smart library features, sustainability practices, and ethical values in libraries. We developed and conducted a pilot study to validate a new instrument designed to assess these intertwined aspects thoroughly. By distributing a survey to 30 librarians from different academic institutions in Malaysia, we used statistical measures such as Cronbach's alpha, omega, and corrected item-total correlation to assess the validity and reliability of the instrument. The results showed a high level of reliability, with Cronbach's alpha at 0.929 and omega at 0.918, suggesting that the instrument has strong internal consistency and could be effective for wider use. Our research indicates that the newly developed instrument effectively captures the complex nature of smart sustainable libraries, demonstrating its potential for future research and practical use in the field. This research significantly contributes to the library science field by offering a validated tool to evaluate smart sustainable library development.
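For reference, Cronbach's alpha as used above is straightforward to compute from a respondents-by-items score matrix; the toy 5-point Likert data below is an assumption for illustration.

```python
# Sketch: Cronbach's alpha = k/(k-1) * (1 - sum(item variances)/variance(totals)).
import numpy as np

def cronbach_alpha(scores):                    # rows = respondents, cols = items
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of respondent totals
    return k / (k - 1) * (1 - item_var / total_var)

toy = np.random.randint(1, 6, size=(30, 12))   # 30 respondents, 12 items (toy data)
print(cronbach_alpha(toy))
```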
Energy limitation is one of the essential parameters in the design of wireless body area networks (WBANs), as it is important to improve the lifetime of the network. WBAN routing is an effective approach for establishing energy-efficient clusters and assigning time slots in the network. Many algorithms that deal with the interference model treat the whole WBAN as a minimum interference unit and increase its lifetime cycle. In this research, we report an effective low-energy adaptive clustering hierarchy (LEACH) routing protocol, using MATLAB simulation and related C++ simulation code, to enhance the overall performance of the network by improving energy efficiency and network lifetime cycles. Furthermore, the study sheds light on a comparison of the protocol and proposes a modified protocol for WBAN. Based on the results obtained from different configurations of the proposed design, the base station should be situated near the network to ensure high network performance.
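The core of LEACH referenced above is its rotating cluster-head election threshold T(n) = p / (1 - p (r mod 1/p)), where p is the desired cluster-head fraction and r the current round; a sketch follows, with the parameter values assumed for illustration.

```python
# Sketch: LEACH cluster-head self-election for one node in round r.
import random

def is_cluster_head(p, r, was_ch_recently):
    if was_ch_recently:                  # nodes that served recently sit out
        return False
    threshold = p / (1 - p * (r % int(1 / p)))   # rotating threshold T(n)
    return random.random() < threshold           # elect self with prob. T(n)

print(is_cluster_head(p=0.1, r=3, was_ch_recently=False))
```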
A system that recognizes the iris is susceptible to presentation attacks (PAs), in which a malicious party shows artifacts such as printed eyeballs, patterned contact lenses, or cosmetics to obscure their personal identity or manipulate someone else's identity. In this study, we suggest the dual-channel DenseNet presentation attack detection (DC-DenseNetPAD), an iris PA detector based on a dependable and effective convolutional neural network architecture known as DenseNet. It displays generalizability across PA datasets, sensors, and artifacts. The efficiency of the suggested iris PA detection technique is supported by tests performed on popular, openly accessible datasets (LivDet-2017 and LivDet-2015). The proposed technique outperforms state-of-the-art techniques with a true detection rate of 99.16% on LivDet-2017 and 98.40% on LivDet-2015, an improvement over the existing techniques using the LivDet-2017 dataset. We employ Grad-CAM as well as t-SNE plots to visualize intermediate feature distributions and fixation heatmaps in order to demonstrate how well DC-DenseNetPAD performs.
This research aims to propose a machine learning approach to automatically evaluate or categorize hospital quality status using quality indicator data. The research was divided into six stages: data collection, preprocessing, feature engineering, data training, data testing, and evaluation. In 2020, we collected 5,542 data values for quality indicators from 658 Indonesian hospitals. However, we analyzed data from only 275 hospitals due to inadequate submission. We employed machine learning methods such as decision tree (DT), Gaussian naïve Bayes (GNB), logistic regression (LR), k-nearest neighbors (KNN), support vector machine (SVM), linear discriminant analysis (LDA), and neural network (NN) for research archive purposes. Logistic regression achieved a 70% accuracy rate, SVM 68%, and the neural network 59.34%. Moreover, k-nearest neighbors achieved 54% accuracy and the decision tree 41%. Gaussian naïve Bayes achieved a 32% accuracy rate. Linear discriminant analysis achieved the highest accuracy at 71%. It can be concluded that linear discriminant analysis is the algorithm best suited to the hospital quality data in this research.
Internet behavior models have found applications across diverse domains, notably in internet addiction, customer satisfaction analysis, user purchasing behavior prediction, and optimizing internet of things (IoT) sensor performance. However, a notable gap exists in exploring these models for enhancing internet quality of service (QoS), specifically in campus settings, which is intricately linked to the nuances of students' online behavior. This study elucidates the strategic utilization of internet behavioral models for augmenting internet QoS and facilitating user behavior analysis. Creating datasets grounded in internet users' access behavior represents a pivotal phase, with explicit, implicit, and mixed methods emerging as the prevailing approaches. In this comprehensive literature review, we systematically scrutinized the methods, techniques, and inherent characteristics of constructing internet behavior models according to a systematic literature review process. The qualitative findings extracted from the systematic review encapsulated 1,046 articles, meticulously classified according to predefined inclusion and exclusion criteria. Subsequently, 35 articles were judiciously selected for in-depth analysis. This study culminated in identifying the most pertinent methodologies and salient features pivotal to constructing robust internet behavior models for improving internet QoS and user experience.
With technological advancement worldwide, the video surveillance market is growing rapidly across versatile fields. Monitoring, browsing, and retrieving a specific object in a long video is difficult due to the enormous amount of data produced by surveillance cameras. With limits on human resources and browsing time, a new video analytics model is needed to handle more complex tasks, such as object detection and query retrieval. Current approaches involve techniques like unsupervised segmentation, multiscale segmentation, and feature-based descriptions, but these methods often incur extensive space and time costs. A solution has been developed for retrieving targeted objects from surveillance videos via user queries, employing a graphical interface for input. Relevant frames are extracted based on user-entered text queries, with YOLOv8 performing object detection. Users interact through a graphical user interface deployed on a Jetson Xavier development board. The system's outcome is a time-efficient and highly accurate automated model for object detection and query retrieval, eliminating the human errors associated with manually locating objects in videos.
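A minimal sketch of the query-to-frame retrieval idea using the ultralytics YOLOv8 API. The video path and the assumption that the text query maps directly to a COCO class name are illustrative; the paper's GUI and Jetson deployment are not reproduced here.

```python
# Sketch: keep only the frames of a video that contain the queried object class.
# Assumes the ultralytics package and a pretrained COCO model; the path is a placeholder.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # pretrained on COCO
query = "truck"                        # user-entered text query mapped to a class name

matching_frames = []
# stream=True yields one result per frame without loading the whole video
for i, result in enumerate(model("surveillance.mp4", stream=True)):
    names = [result.names[int(c)] for c in result.boxes.cls]
    if query in names:
        matching_frames.append(i)

print(f"frames containing a {query}:", matching_frames[:20])
```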
Generating entity recommendations has attracted considerable interest in recent years. Most recently published works focus on providing a user with the most relevant and/or personalized entity recommendations, those that score highly against the query and/or the user's preferences. Some works consider user-side information, such as the user's network, relations, and demographic information, and propose integrating it into the recommender framework; these approaches have been shown to increase users' satisfaction and engagement with the system. In this paper, we investigate entity recommender systems and summarize recent efforts in the domain by categorizing approaches. The first category covers approaches that utilize a knowledge graph as side information. The second gathers works that consider both the current query and the user's previous interactions with the system; these works use the full user history to personalize the ranking of recommended entities related to the query. In this review, we emphasize contextual-information-based approaches that utilize the user's context and feedback to improve recommendations. We summarize the literature and synthesize the papers according to different perspectives. Finally, a comparison between approaches is provided and some drawbacks are identified.
Brain tumors are among the most frequently diagnosed cancers, and diagnosis from brain images is a sensitive and complex task that is the subject of numerous studies and inquiries. In computer vision, deep learning techniques such as the convolutional neural network (CNN) are employed for their classification capabilities using learned features and their ability to work with complex images. However, their performance is highly dependent on the network structure and on the optimization method chosen for tuning the network parameters. In this paper, we present new yet effective methods for training convolutional neural networks. The majority of current state-of-the-art training methods for convolutional neural networks are based on gradient descent. In contrast to traditional training methods, we propose an enhancement that incorporates a genetic algorithm for brain tumor prediction. Our work involves designing a convolutional neural network model for the classification task, training the model using different optimizers (Adam and the genetic algorithm), and assessing the model through various experiments on a brain magnetic resonance imaging (MRI) dataset. We demonstrate that the convolutional neural network trained with the genetic algorithm performs as well as the Adam optimizer, achieving a classification accuracy of 99.5%.
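To make the genetic-algorithm idea concrete, here is a toy sketch that evolves the weights of a tiny linear classifier on synthetic data. The population size, selection scheme, crossover, mutation rate, and task are all illustrative assumptions; the paper applies the same loop to CNN parameters on MRI data.

```python
# Toy genetic algorithm optimizing the weights of a tiny one-layer classifier.
# Everything here (model size, GA hyperparameters, data) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # synthetic binary labels

def accuracy(w):
    logits = X @ w[:10] + w[10]               # weights + bias packed in one vector
    return np.mean((logits > 0) == y)

pop = rng.normal(size=(50, 11))               # population of candidate weight vectors
for generation in range(100):
    fitness = np.array([accuracy(w) for w in pop])
    parents = pop[np.argsort(fitness)[-10:]]  # selection: keep the 10 fittest
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(11) < 0.5           # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(scale=0.1, size=11) * (rng.random(11) < 0.2)  # mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=accuracy)
print("best accuracy:", accuracy(best))
```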
Software development must include automated testing to ensure the finished product performs as intended. However, test case generation and maintenance can be time-consuming and error-prone, especially when manual methods are used. This research proposes a new approach to improve the efficiency and accuracy of automated testing using latent semantic analysis (LSA)-based TextRank (TR) and particle swarm optimization (PSO) algorithms. The study aims to evaluate the effectiveness of these algorithms in generating and optimizing test cases from requirements analysis. To retrieve key information from the requirements, the text is processed with text classification (TC), named entity recognition (NER), and sentiment analysis (SA). Test cases are then generated using LSA-based TR for text summarization and PSO for optimization. The aim of this work is to identify any limitations that need to be addressed and to evaluate the overall efficiency and accuracy of automated testing (AT) using the proposed algorithms. The results are expected to have important implications for the software industry, helping to improve the overall efficiency and accuracy of AT, and the findings could guide future research toward more advanced and effective AT tools.
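A minimal sketch of LSA-based TextRank as described above: sentences are embedded with truncated SVD over TF-IDF, and PageRank over the similarity graph ranks them for summarization. The requirement sentences and all parameters are illustrative; the PSO stage is not reproduced here.

```python
# Sketch of LSA-based TextRank: rank sentences by PageRank over similarities
# computed in an LSA (truncated SVD) space. Corpus and parameters are illustrative.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The system shall authenticate users before granting access.",
    "Users must reset their password every ninety days.",
    "The system shall log every failed authentication attempt.",
    "Reports are generated nightly and emailed to administrators.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# LSA vectors can have negative components, so clip similarities at zero
sim = np.clip(cosine_similarity(lsa), 0, None)
graph = nx.from_numpy_array(sim)                 # weighted sentence graph
scores = nx.pagerank(graph, weight="weight")     # TextRank = PageRank on the graph

top = sorted(scores, key=scores.get, reverse=True)[:2]
for idx in sorted(top):                          # keep original sentence order
    print(sentences[idx])
```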
As cyberattacks grow more complex and sophisticated, stringent, multi-layered security measures are required. Previous attempts to ensure data security have primarily focused on tokenization techniques or complex encryption algorithms; while these methods work well on their own, each has drawbacks and has proven vulnerable to sophisticated cyberattacks. This research presents new ways to improve data security in digital storage and communication systems. We address data security by proposing a multi-level encryption strategy that combines double encryption with tokenization. The first step in the procedure is a byte-level byte-pair encoding (BPE) tokenizer, which tokenizes the input data and adds a layer of protection by making it unreadable. After tokenization, the data is encrypted using Rivest-Shamir-Adleman (RSA) to create a strong initial level of security. To further enhance security, the RSA-encrypted data receives an additional layer of encryption using the advanced encryption standard (AES). This article describes how the approach is implemented in practice and shows that it protects data at a higher level than single-layer encryption or tokenization systems.
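A minimal sketch of the layering idea using the Python `cryptography` package. For runnability it encrypts a short byte string directly with RSA-OAEP (RSA can only encrypt small payloads) and then wraps the ciphertext in AES-GCM; plain UTF-8 bytes stand in for the BPE token stream. This illustrates layered encryption under those assumptions, not the authors' exact implementation.

```python
# Sketch: RSA layer followed by an AES-GCM layer, using the `cryptography` package.
# A real BPE tokenizer would precede this; plain UTF-8 bytes stand in for tokens here.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

tokens = "transfer 250 USD to account 42".encode()   # stand-in for BPE token bytes

# Layer 1: RSA-OAEP (only suitable for short payloads, as here)
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
layer1 = rsa_key.public_key().encrypt(
    tokens,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Layer 2: AES-GCM over the RSA ciphertext
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
layer2 = AESGCM(aes_key).encrypt(nonce, layer1, None)

# Decryption reverses the layers
inner = AESGCM(aes_key).decrypt(nonce, layer2, None)
plain = rsa_key.decrypt(
    inner,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert plain == tokens
```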
This article describes the characterization of facial and ocular gestures using electroencephalography (EEG) with an EMOTIV EPOC+ Brainwear® device. The characterization is based on the storage of raw (unprocessed) data acquired by the device. The experiment was applied to nine subjects, given that EEG explores, with high levels of statistical confidence, the bioelectric activity of the brain in resting states such as wakefulness or sleep. In contrast to non-resting states, the recorded data showed random and distinct activation under hyperpnea and intermittent luminous stimuli. Despite the reduced number of samples in the experiment, the results showed a confidence level greater than 75%. The data was characterized and processed with a support vector machine (SVM).
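A minimal sketch of the final SVM step, with a synthetic stand-in for the gesture feature vectors; the channel count matches the EPOC+ headset's 14 electrodes, but the features, labels, and kernel settings are illustrative assumptions.

```python
# Sketch: classifying gesture feature vectors with an SVM, as in the pipeline above.
# The 14-feature matrix and labels are synthetic placeholders, not real EEG data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 14))                 # e.g. one feature per EPOC+ channel
y = rng.integers(0, 3, size=120)               # e.g. blink / frown / neutral

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```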
The automation industry has been developing since the first demonstrations of automated driving vehicles between 1980 and 2000. Nowadays, automotive companies, technology companies, government bodies, research institutions and academia, investors, and venture capitalists are all interested in autonomous vehicles. In this work, on-road object detection is proposed using deep learning (DL) algorithms: you only look once (YOLO v3, v4, and v5). A road dataset is taken as input, where the objects are mainly on-road vehicles (cars, trucks, and buses) and traffic signals; these inputs are given to the models to predict and detect the objects. The performance of the proposed system is compared with that of a convolutional neural network (CNN). The proposed system's accuracy ranges from 76.5% to 93.3%, with a mean average precision (mAP) of 0.895 and a rate of 43.95 frames per second (FPS).
Sentiment analysis is a method of analyzing data to identify its intent; it identifies the emotional tone of a body of text. Aspect-based sentiment analysis is a text analysis technique that identifies each aspect and the sentiment associated with it. Organizations use aspect-based sentiment analysis to analyze opinions about a product, service, or idea. Traditional sentiment analysis methods analyze the complete text and assign a single sentiment label to it; they do not handle aspect association, multiple aspects, and the inclusion of linguistic concepts together as one system. In this article, AlgoDM, an algorithm for aspect-based sentiment analysis, is proposed. AlgoDM uses a novel IDistance matrix to extract aspects, associate aspects with sentiment words, and determine the sentiment associated with each aspect. The IDistance matrix is constructed to calculate the distance between aspects and the words expressing sentiment about them. The algorithm works at the sentence level, identifying the opinion expressed on each aspect appearing in the sentence as well as the overall sentiment of the sentence. The proposed algorithm can perform sentiment analysis of any opinionated text.
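To illustrate the general distance-based association idea (not the paper's IDistance matrix itself), the toy sketch below assigns each sentiment word to the nearest aspect term by token distance; the sentence, aspect list, and two-word lexicon are all invented for illustration.

```python
# Toy illustration of distance-based aspect-sentiment association (not the
# paper's IDistance matrix): each sentiment word goes to the nearest aspect term.
sentence = "the battery is great but the screen is dim".split()
aspects = ["battery", "screen"]
sentiment = {"great": +1, "dim": -1}            # tiny illustrative lexicon

positions = {w: i for i, w in enumerate(sentence)}
result = {}
for word, polarity in sentiment.items():
    # token distance from this sentiment word to each aspect
    nearest = min(aspects, key=lambda a: abs(positions[a] - positions[word]))
    result[nearest] = polarity

print(result)   # {'battery': 1, 'screen': -1}
```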
Early detection and diagnosis of breast cancer are critical for saving lives. This paper addresses two major challenges associated with this task: the vast amount of data processing involved and the need for early detection of breast cancer. To tackle these issues, we developed thirty hybrid architectures by combining five deep learning techniques (Xception, Inception-V3, ResNet50, VGG16, VGG19) as feature extractors with six classifiers (random forest, logistic regression, naive Bayes, gradient-boosted tree, decision tree, and support vector machine), implemented on the Spark framework. We evaluated the performance of these architectures using four classification criteria. The results, analyzed using the Scott-Knott statistical test, demonstrated the effectiveness of merging deep learning feature extraction with traditional classifiers for classifying breast cancer into malignant and benign tumors. Notably, the hybrid architecture using logistic regression as the classifier and ResNet50 for feature extraction (RESLR) emerged as the top performer. It achieved accuracy scores of 98.20%, 96.59%, 96.64%, and 94.84% on the BreakHis dataset at different magnifications (40X, 100X, 200X, and 400X), respectively. Additionally, RESLR achieved an accuracy of 97.05% on the ICIAR dataset and a remarkable 95.31% on the FNAC dataset.
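A minimal sketch of the hybrid pattern described above: a pretrained CNN serves as a frozen feature extractor feeding a classical classifier. It assumes TensorFlow/Keras and scikit-learn are available, uses random images in place of histopathology data, and omits the paper's Spark deployment.

```python
# Sketch of the hybrid pattern: ResNet50 features into logistic regression.
# Images and labels are random placeholders, not histopathology data.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.linear_model import LogisticRegression

extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(32, 224, 224, 3)).astype("float32")
labels = rng.integers(0, 2, size=32)            # benign / malignant stand-ins

features = extractor.predict(preprocess_input(images))   # (32, 2048) embeddings
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```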
This research aims to identify the principles of designing objects of the inclusive environment with the employment of additive manufacturing technologies, and to discover methods and techniques for creating an inclusive objective environment using the example of our own development. The article presents the results of a survey investigating the topicality of the problem of inclusiveness in Ukraine and the means of its solution. In the course of the work, the principal peculiarities of three-dimensional (3D) modelling and printing technologies have been established, and promising areas of their application have been proposed. The principles of designing an inclusive objective environment have been identified with the use of photogrammetry and 3D printing, whereby a model can be constructed that takes a person's individual physical characteristics into account. Moreover, due to the wide range of materials available for 3D printing, various types of objects can be realized. This gives great potential for the employment of 3D printing in designing an inclusive environment and considerably simplifies the manufacturing process while accounting for the individual characteristics of every person.
Given the increase of information circulating through public channels, it is essential to create robust schemes to ensure the security of such information. The results presented here were part of the research project entitled "computer security models based on mathematical tools and artificial intelligence". An algorithm focused on the encryption of images carrying steganographed texts is proposed, using chaos, artificial vision, and coding based on deoxyribonucleic acid (DNA). The process consists of a steganographic step and a cryptographic step. In the steganographic stage, a color image was taken and the combined Canny and Sobel filters were applied to obtain its dilated edges; using Chen's chaotic attractor, edge positions were selected in which to hide a text in binary ASCII code using the least significant bit technique. In the encryption stage, Chen's chaotic system was used to permute the stego-image and to create a chaotic image used in the diffusion process. These two images were divided into blocks represented in DNA coding, with the rule to apply selected through the three-dimensional logistic system, and finally the XOR operation was applied by layers, obtaining a single encrypted image. To validate the proposed model, safety and performance tests were applied, obtaining indicators comparable with some current scientific references.
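A minimal sketch of the least-significant-bit (LSB) embedding step named above, on a random grayscale image with fixed pixel positions. The paper hides bits at chaotically selected edge positions; this toy version uses the first N pixels instead, so only the LSB mechanics are illustrated.

```python
# Sketch of LSB embedding and extraction; the cover image and the choice of
# pixel positions are illustrative (the paper selects positions along edges
# chosen via Chen's chaotic attractor).
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in cover image

message = "secret"
bits = [int(b) for byte in message.encode() for b in f"{byte:08b}"]

flat = image.flatten()
positions = np.arange(len(bits))                    # illustrative: first N pixels
flat[positions] = (flat[positions] & 0xFE) | bits   # overwrite each pixel's LSB
stego = flat.reshape(image.shape)

# Extraction: read the LSBs back and repack them into bytes
recovered_bits = stego.flatten()[positions] & 1
recovered = bytes(
    int("".join(map(str, recovered_bits[i:i + 8])), 2)
    for i in range(0, len(recovered_bits), 8)
)
print(recovered.decode())   # -> "secret"
```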
