Special Issue: Technologies and Their Effects on Real-Time Social Development
https://www.ubplj.org/index.php/tesd
University of Buckingham Press
en-US
Special Issue: Technologies and Their Effects on Real-Time Social Development
1479-8751
1D-CNN: One Dimensional Convolution Neural Network-based Electroencephalogram (EEG) Signals Classification with Efficient Artifact Removal for Real-Time Medical Applications
https://www.ubplj.org/index.php/tesd/article/view/2524
<p>Mental task detection and classification using single- or limited-channel electroencephalogram (EEG) signals in real time plays a significant part in the design of mobile brain-computer interface (BCI) and neurofeedback (NFB) systems. However, EEG signals recorded in real time are frequently contaminated with noise such as ocular artifacts (OAs) and muscle artifacts (MAs), which degrade the handcrafted features extracted from the EEG signal and lead to poor detection and classification of mental tasks. Hence, we analyse the use of recent deep learning approaches that require no manual feature extraction or artifact suppression stage. This study proposes a one-dimensional convolutional neural network (1D-CNN) framework for mental task detection and classification. The proposed framework's robustness is analysed using artifact-free and artifact-contaminated EEG signals obtained from publicly accessible datasets, in particular recordings from the Emotiv EPOC headset. The proposed 1D-CNN attains an accuracy of 0.992, precision of 0.993, recall of 0.9905, FPR of 0.0065, and F-measure of 0.992. Comparative performance assessment shows that the proposed framework surpasses existing techniques not only in classification precision but also in robustness against artifacts.</p>Padmini Chattu*, Dr. C.V.P.R. Prasad
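For orientation, the sketch below shows what a 1D-CNN over raw multichannel EEG windows can look like in PyTorch; the layer sizes, channel count, and window length are illustrative assumptions, not the architecture proposed in the paper.

```python
# Minimal 1D-CNN sketch for windowed EEG classification (PyTorch).
# Hypothetical layer sizes; the paper's actual architecture is not specified here.
import torch
import torch.nn as nn

class EEG1DCNN(nn.Module):
    def __init__(self, n_channels=14, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),  # temporal filters over raw EEG
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(64 * (n_samples // 16), n_classes)

    def forward(self, x):            # x: (batch, channels, samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = EEG1DCNN()
logits = model(torch.randn(8, 14, 256))   # 8 windows of (possibly artifact-contaminated) EEG
print(logits.shape)                       # torch.Size([8, 2])
```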
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2524
A New Comic Image Segmentation and Adaptive Differential Evolution Algorithm with Different Times Characteristics
https://www.ubplj.org/index.php/tesd/article/view/2501
<p>Comic scene segmentation is crucial in understanding and analyzing visual storytelling, as it involves identifying and separating distinct elements within a sequence of panels. This paper proposes a novel segmentation approach, Frog Leap Differential Time Series Segmentation (FLDTSS), tailored for analyzing comic images, which often contain complex visual storytelling elements such as expressive characters, dynamic speech bubbles, and background effects. By leveraging time-series features across sequential comic panels, FLDTSS integrates both spatial and temporal cues for more context-aware segmentation. The method was tested on a diverse set of cartoon panels and achieved a precision of 91.6%, recall of 88.3%, and an F1-score of 89.9%, outperforming traditional methods such as Otsu Thresholding (F1-score: 70.6%), Edge-based Canny (76.1%), K-means Clustering (77.8%), Watershed (80.6%), and even Genetic Algorithm-based segmentation (83.2%). The segmentation time for FLDTSS was 1.22 seconds, demonstrating computational efficiency compared to more intensive evolutionary methods. Simulation results showed the model's ability to extract meaningful narrative components such as characters, speech bubbles, emotional cues, and visual effects, with background occupying ~55% of the segmented area, character regions ~22%, and speech bubbles ~8%. This study confirms FLDTSS as a powerful and scalable technique for semantic segmentation and narrative interpretation in visual storytelling formats like comics.</p>Dr. Sivanagireddy Kalli*, Dr. Srilakshmi Aouthu, Dr. B. Narendra Kumar, Dr. Yerram Srinivas, Dr. S. Jagadeesh, Dr. Jatothu Brahmaiah Naik
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2519
Anonymous Consistent Reliable LDPC Using IPA and BCS With Unfamiliar Threshold
https://www.ubplj.org/index.php/tesd/article/view/2519
<p>Model-based compressive sensing (CS) for signal-specific applications is of particular interest in the communication sector. This process casts the problem of signal reconstruction and threshold estimation as one of learning a hyperplane that separates the sampling vectors. This paper presents a comprehensive study on the design and implementation of a Pi rotation-based encoder within the framework of Low-Density Parity-Check (LDPC) codes, focusing on optimizing constraints to enhance performance. Addressing the inherent complexity of the optimal Maximum A Posteriori (MAP) estimator, we propose two suboptimal solutions, including an iterative approach capable of managing large-scale problems. Leveraging interval analysis techniques allows for the rapid exclusion of inconsistent solutions concerning the signal model and quantization noise, thus improving computational efficiency. Additionally, we introduce the Binary Compressive Sensing with Unknown Threshold (BCS-UT) algorithm, which outperforms existing methods despite the lack of knowledge of the threshold. The proposed framework accommodates noisy binary measurements by incorporating slack variables to relax measurement consistency conditions. Furthermore, we introduce two modifications to the Sum-Product Algorithm (SPA) based on the tanh and tanh−1 functions, which are essential for enhancing the performance of LDPC codes and reducing the error floor relative to the standard SPA.</p>Pechetti Girish*, Bernatin T
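As background for the SPA modifications mentioned above, the following sketch implements the standard tanh-rule check-node update on log-likelihood-ratio messages; the paper's modified tanh and tanh−1 variants are not reproduced here.

```python
# Standard tanh-rule check-node update of the Sum-Product Algorithm (SPA) for LDPC decoding.
# Sketch only: the paper's two modified updates are not reproduced here.
import numpy as np

def check_node_update(llrs):
    """Outgoing LLR on each edge from the product of tanh(L/2) over the other edges."""
    llrs = np.asarray(llrs, dtype=float)
    t = np.tanh(llrs / 2.0)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        prod = np.prod(np.delete(t, i))            # exclude the target edge
        prod = np.clip(prod, -0.999999, 0.999999)  # numerical safety for arctanh
        out[i] = 2.0 * np.arctanh(prod)
    return out

print(check_node_update([1.2, -0.4, 2.5, 0.7]))
```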
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2499
Artificial Intelligence Model with Optimization Technique to Improve Job Autonomy
https://www.ubplj.org/index.php/tesd/article/view/2499
<p>This paper presents a novel approach to enhancing workplace well-being and classification accuracy, specifically tailored to the dynamic and competitive IT industry in India. The research leverages Simulated SeaHorse Optimization (SSHO), a nature-inspired optimization technique, to estimate and improve job autonomy and happiness scores in the workplace. Furthermore, SSHO is combined with Long Short-Term Memory (LSTM) networks to create a robust classification model. The study's key findings indicate a direct correlation between the number of SSHO iterations and the enhancement of job autonomy and happiness scores, highlighting the potential of SSHO as an effective tool for optimizing these critical workplace factors. Moreover, the SSHO-LSTM model outperforms traditional models, achieving remarkably high accuracy, precision, recall, and F1-Score in classifying data. The practical implications of this research are significant, as it offers a promising approach for organizations to create a more favorable work environment, ultimately contributing to higher job satisfaction and well-being among employees. This paper advances the understanding of optimization techniques, well-being in the workplace, and intrapreneurial characteristics, providing valuable insights for industry professionals and researchers seeking to improve employee experiences in the IT sector. In conclusion, this paper demonstrates the potential of SSHO and SSHO-LSTM as tools to optimize workplace well-being and enhance classification accuracy, making a substantial contribution to the fields of optimization, machine learning, and workplace well-being in the IT industry.</p>M. Sowjanya*, Dr. Madireddi S S V Srikumar
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2516
Selective Scan for Near Reversible Data Hiding Using DCT
https://www.ubplj.org/index.php/tesd/article/view/2516
<p>Data hiding has become a major area of study due to the rapid growth of the internet and multimedia technologies. Since data digitization and communication networking have advanced over the past ten years, protecting data transmitted over the internet has become a major concern. Numerous applications in steganography and watermarking find it desirable to embed data into multimedia cover content, such as audio, video, and photos. Watermarking is used for copyright protection, while steganography is chosen for covert communication. Even when the embedded data has been extracted, irreparable harm to the cover media is usually unavoidable with such techniques. Perfect restoration of the cover medium is necessary for some applications, including remote sensing, military mapping, geographic mapping, and medical diagnostics. Therefore, in sensitive applications like medical diagnosis, reversible data hiding techniques are used to restore the cover media without distortion. Confidential information, the content owner's identity, cover content details, etc., are examples of the data that may be embedded. The data hiding techniques are compared to determine the trade-off between embedding capacity and visual quality. The frequency-domain techniques modify the frequency coefficients of the cover image. According to experimental results, the recommended procedures perform better than the current approaches for standard QCIF-encoded videos and standard grayscale images from the USC SIPI image database. The suggested method considers the information of the cover content during data embedding and provides a trade-off between visual quality and embedding capacity.</p>T. Bhaskar*, M. N. Narsaiah, G. Anil Kumar, J. Sravanthi, T. Kiran
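To illustrate frequency-domain embedding of the kind discussed above, the toy sketch below hides one bit in a mid-frequency DCT coefficient of an 8x8 block via coefficient parity; the coefficient position and quantization step are hypothetical choices, and this is not the paper's selective-scan scheme.

```python
# Toy DCT-domain embedding sketch: hide one bit in a mid-frequency coefficient of an
# 8x8 block by forcing the parity of its quantized value. Illustrative only.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(4, 3), step=8.0):
    coeffs = dctn(block.astype(float), norm='ortho')
    q = np.round(coeffs[pos] / step)
    if int(q) % 2 != bit:               # force coefficient parity to carry the bit
        q += 1
    coeffs[pos] = q * step
    return idctn(coeffs, norm='ortho')

def extract_bit(block, pos=(4, 3), step=8.0):
    coeffs = dctn(block.astype(float), norm='ortho')
    return int(np.round(coeffs[pos] / step)) % 2

block = np.random.randint(0, 256, (8, 8))
stego = embed_bit(block, 1)
print(extract_bit(stego), np.abs(stego - block).max())  # recovered bit and max distortion
```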
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2497
Facial Feature Recognition Model for The Sleepiness Detection of The Drivers
https://www.ubplj.org/index.php/tesd/article/view/2497
<p>Facial feature recognition is a pivotal technology in computer vision, enabling accurate identification and analysis of human faces by extracting and analyzing key facial landmarks and attributes. This process involves detecting features such as eyes, nose, mouth, and jawline, as well as capturing finer details like textures, edges, and contours. Advanced techniques leverage machine learning and deep learning algorithms, including convolutional neural networks (CNNs) and deep feature extraction models, to enhance accuracy and robustness. Applications of facial feature recognition span a wide range of domains, from security systems and identity verification to emotion detection, healthcare diagnostics, and personalized user experiences. This study presents the LGFR-DL model, a high-performance deep learning framework designed for accurate classification of drowsiness states in real-time applications. The model effectively identifies Awake, Drowsy, and Sleepy (Critical) states with accuracy levels of 98.5%, 90.0%, and 94.0%, respectively, while maintaining high precision, recall, and F1-scores across all categories. Leveraging fused feature extraction, the LGFR-DL model outperforms traditional CNN and DNN models, achieving a superior ROC-AUC of 0.98 and minimal validation loss of 0.075. With a low latency of 42 ms and robust generalization, the model is optimized for real-world applications like driver monitoring systems. This work underscores the potential of LGFR-DL in advancing safety-critical systems by providing reliable and efficient drowsiness detection, paving the way for improved accident prevention and enhanced operational security.</p>Srinivasa Sai Abhijit Challapalli*
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2514
GPGPU Scheduling Schemes to Improve Latency and Resource Utilization
https://www.ubplj.org/index.php/tesd/article/view/2514
<p>Real-time embedded systems today need GPU scheduling methods that are fast, flexible, and energy-saving while handling different types of tasks with limited resources. Current schedulers, like federated and hierarchical ones, often have problems giving real-time guarantees, adapting to changing needs, and saving power, especially on AMD GPUs that have limited Local Data Share (LDS) and High Bandwidth Memory (HBM). This paper presents a new Hybrid GPGPU Scheduler that solves these problems. It combines static, dynamic, and machine learning (ML)-based scheduling with task models that are aware of memory limits. Using Gradient Boosting Decision Trees (GBDT)/XGBoost, the scheduler predicts task features and decides if they can meet deadlines, while smartly assigning Compute Units (CUs).</p> <p>Tests on a Linux platform with PoCL and ROCm show that the scheduler beats others like RTGPU, Self-Suspension, and Enhanced MPCP. It improves task scheduling by 11% at high usage (U = 2.0), accepts 85% of tasks when 12 run together, and cuts scheduling delay by 40%. It also saves 9% more energy per watt while keeping power use low (1.0–2.0 W).</p> <p>Resource use stays high, with 92% CU and 85% CPU usage. These results prove that the Hybrid GPGPU Scheduler is a scalable and energy-efficient solution for real-time GPU scheduling in embedded systems, balancing performance, flexibility, and power savings in systems with tasks of different criticality levels.</p>Narayana Murty N*, Dr. Harjit Singh, Dr. Sukhmani K. Thethi
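A minimal sketch of the ML admission step described above, using a gradient-boosted classifier to predict whether a task can meet its deadline; the task features and the toy execution-time model are assumptions for illustration, not the paper's dataset or XGBoost configuration.

```python
# Sketch of the ML admission step: a gradient-boosted classifier predicting whether a
# GPU task can meet its deadline from hypothetical task features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# features: [workload (GFLOP), LDS footprint (KB), HBM traffic (MB), period (ms), assigned CUs]
X = rng.uniform([1, 4, 1, 2, 4], [50, 64, 256, 30, 60], size=(500, 5))
exec_time = 0.4 * X[:, 0] / X[:, 4] + 0.02 * X[:, 2]      # toy execution-time model
y = (exec_time <= X[:, 3]).astype(int)                     # 1 = deadline can be met

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)
task = [[20.0, 32.0, 64.0, 10.0, 16.0]]
print("admit" if clf.predict(task)[0] else "reject")
```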
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2494
Research On The Identification Of Key Industries For Soil And Groundwater Environmental Control And Their Pollution Characteristics In China
https://www.ubplj.org/index.php/tesd/article/view/2494
<p>With rapid industrial development, environmental pollution, particularly of soil and groundwater, has become a critical concern in China. Identifying high-risk industries and implementing targeted management strategies is essential for effective ecological protection and pollution control. This study systematically screened and identified key industries contributing to soil and groundwater contamination by integrating extensive enterprise data, pollutant discharge profiles, and the national policy framework. Ten major medium-sized industries were recognized as priority targets for environmental oversight, including coal processing, metal surface treatment, and heat treatment. These industries were characterized by high enterprise density, substantial pollutant emissions, and complex production processes. Detailed analysis of the process stages and typical pollutants in the two representative sectors revealed that specific operations are closely linked to the release of heavy metals, persistent organic pollutants, and other hazardous substances. Based on these findings, the study proposes targeted management strategies to strengthen pollution prevention and control at the grassroots level. The results provide a scientific basis for regulatory decision-making, support the formulation of industry-specific guidelines, and contribute to the advancement of national efforts for soil and groundwater protection. Additionally, this work highlights the need for risk-based, industry-focused approaches to address environmental pollution in the context of sustainable industrial development.</p>Meixi Huo*, Rui Zhou
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2512
Dynamic Intelligent Channel Assignment Model with Optimized Throughput-based Cognitive UAV Guided Smart Internet of Things Environment
https://www.ubplj.org/index.php/tesd/article/view/2512
<p>IoT technology allows numerous devices to link up with the Internet and exchange data smoothly. It is predicted that, in the near future, trillions of these devices will be connected. As a result, there is a growing demand for spectrum to deploy these devices. Many of these devices operate on unlicensed frequency bands, leading to interference as these bands become overcrowded. A new communication approach known as cognitive radio-based Internet of Things (CR-IoT) is rapidly emerging to address this issue and the spectrum scarcity. This involves integrating cognitive radio technology into IoT devices, allowing for dynamic spectrum access and overcoming interference problems. In current systems, a significant portion of the spectrum designated for primary users (PU) may be underutilized, leaving room for secondary users (SU) to utilize the spectrum. However, the main challenge is that SUs must continuously send packets until they find an available channel in real-world conditions, resulting in excessive communication and packet loss. To overcome these drawbacks, this article develops a dynamic intelligent channel allocation model with optimized throughput for a cognitive UAV-guided network. The major components of this model are UAV-based cognitive IoT network construction, a dynamic intelligent channel assignment model, and an optimized throughput calculation process. By utilizing these methods, we can achieve streamlined channel allocation and economical communication, ultimately enhancing the performance of the UAV-guided CRN-based IoT environment. The model is implemented in MATLAB, and the parameters considered for performance analysis are network throughput, power utilization, energy efficiency, data delivery ratio, and average delay.</p>T. Vijaya Kumar*, Dr. Madona B Sahaai, Dr. C. Sharanya
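As a toy illustration of dynamic channel assignment for secondary users (the paper's implementation is in MATLAB; this Python sketch, with hypothetical sensing results and SNR values, simply gives each SU the idle channel with the highest estimated throughput):

```python
# Toy dynamic channel assignment: each secondary user (SU) receives the idle channel
# with the highest estimated throughput (Shannon capacity from a hypothetical SNR table).
# Illustrative only, not the paper's UAV-guided protocol.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_sus = 8, 5
pu_busy = rng.random(n_channels) < 0.4              # spectrum-sensing result per channel
snr_db = rng.uniform(0, 20, size=(n_sus, n_channels))
throughput = np.log2(1 + 10 ** (snr_db / 10))       # bits/s/Hz per SU-channel pair

assignment = {}
free = set(np.flatnonzero(~pu_busy))
for su in range(n_sus):                             # greedy: best remaining idle channel
    if not free:
        break
    best = max(free, key=lambda c: throughput[su, c])
    assignment[su] = best
    free.remove(best)
print(assignment)
```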
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2492
Foreign Trade Risk Warning Model Based on Deep Learning And Association Rule Mining
https://www.ubplj.org/index.php/tesd/article/view/2492
<p>An economic risk early warning model identifies and assesses potential financial threats by analysing key economic indicators, market trends, and geopolitical events. Using machine learning algorithms, this model continuously monitors data to detect warning signs, such as shifts in inflation, unemployment, or trade imbalances. Early risk alerts enable policymakers and businesses to proactively address vulnerabilities, mitigate financial impacts, and make informed decisions that strengthen economic resilience against potential downturns. This paper presents a comprehensive analysis of risk assessment and financial prediction within the Chinese finance sector. The study investigates the intricate relationships between various financial indicators and their implications for risk assessment using associative rule mining, financial analysis, and predictive modelling techniques. By examining associative rules, the paper illuminates key patterns and associations, offering valuable insights into the drivers of financial risk. Subsequently, a detailed financial analysis is conducted, highlighting the varying risk profiles among prominent players in the sector based on metrics such as debt-to-equity ratio, profit margin, and market capitalization. Furthermore, predictive modelling results provide insights into the effectiveness of predictive models in forecasting the probability of default for financial companies, aiding stakeholders in making informed risk management decisions. The findings underscore the importance of robust financial health and proactive risk management strategies in navigating the dynamic landscape of the Chinese finance sector. For instance, entities like the Industrial and Commercial Bank of China (ICBC) exhibit a low debt-to-equity ratio of 8.2 and a healthy profit margin of 18%, resulting in a low financial risk level. Conversely, companies like Ping An Bank display higher risk profiles with elevated debt levels, such as a debt-to-equity ratio of 10.2, coupled with lower profit margins, resulting in a high financial risk level. Predictive modelling further forecasts the probability of default for financial companies: for example, Bank of China (BOC) and the Agricultural Bank of China (ABC) exhibit predicted probabilities of default of 12.1% and 15.5%, respectively, aligning closely with their actual default status.</p>Xiang Li, Jin Gao*, Wenbo Ma
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2532
Data Augmentation-Based Diabetic Retinopathy Classification and Grading with the Dynamic Weighted Optimization Approach
https://www.ubplj.org/index.php/tesd/article/view/2532
<p>Diabetic retinopathy is a vision-threatening complication of diabetes that affects the retina, the light-sensitive tissue at the back of the eye. This condition arises as a result of prolonged high blood sugar levels, which can damage the small blood vessels in the retina. Diabetic retinopathy typically progresses through different stages, starting with mild non-proliferative retinopathy, where small blood vessels in the retina become weakened and leak. The classification of diabetic retinopathy plays a fundamental role in assessing the effectiveness of treatment and monitoring the progression of the disease over time, ultimately contributing to the preservation of patients' vision and their overall quality of life. This research paper presents an efficient technique for diabetic retinopathy (DR) classification and grading using data augmentation and a dynamic weighted optimization approach. The study contributes to the field of DR in several significant ways. Firstly, advanced data augmentation techniques are employed to generate diverse and representative features from retinal fundus images, enhancing the robustness and generalization capabilities of the models. Secondly, novel segmentation approaches, including multi-level Otsu thresholding and morphological operations, accurately localize and isolate affected regions in retinal images. Thirdly, innovative feature extraction and selection methods, such as the Gray-Level Co-occurrence Matrix (GLCM) and dynamic Flamingo optimization, improve the selection of discriminative features for DR classification. Additionally, a novel cascaded voting ensemble deep neural network model is introduced, which combines the predictions of multiple learning algorithms to enhance classification performance. Lastly, the research addresses the grading of diabetic retinopathy by aligning the classification results with a standardized grading system, providing clinicians with accurate severity assessments for effective treatment decisions. Overall, this paper offers valuable insights and methodologies for improving the classification and grading of diabetic retinopathy, thereby contributing to the advancement of diagnosis and management strategies in the field.</p>Dr. Uppalapati Srilakshmi, Dr. K. Vinay Kumar, Dr. Sirisha Korimilli, Shiramshetty Goutham, Jose Mary Golamari, Dr. Putta Brundavani*
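The sketch below illustrates two of the named building blocks, multi-level Otsu thresholding and GLCM texture features, on a synthetic stand-in image; it is not the paper's full pipeline, and the image and parameters are assumptions for illustration.

```python
# Sketch of two pipeline steps: multi-level Otsu thresholding to localize candidate
# regions and GLCM texture features; synthetic stand-in image for illustration.
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
fundus = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # stand-in fundus channel

thresholds = threshold_multiotsu(fundus, classes=3)               # two thresholds -> 3 regions
regions = np.digitize(fundus, bins=thresholds)                    # label map with values 0/1/2

glcm = graycomatrix(fundus, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {p: float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "homogeneity", "correlation", "energy")}
print(thresholds, features)
```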
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/tesd.v167iA2 (S).2532
Advanced Convolutional Neural Network with SqueezeNet Optimization and Transfer Learning for MRI-Based Brain Tumor Segmentation and Classification
https://www.ubplj.org/index.php/tesd/article/view/2510
<p>Multimodal imaging plays a crucial role in the accurate detection, segmentation, and classification of brain tumors by leveraging complementary information from multiple MRI sequences. Each modality provides distinct insights into tumor structure, location, and pathology. In this study, we offer a state-of-the-art deep learning system for efficient and accurate multimodal MRI tumor detection in the brain. The proposed methodology integrates a novel Improved Pyramid Convolutional Neural Network (I-PCNN) with an enhanced Pyramid Nonlocal U-Net (PN-UNET) architecture to leverage both local and global contextual features for precise tumor segmentation. Additionally, the Improved Pyramid Histogram of Oriented Gradients (I-PHOG) technique is introduced for robust feature extraction, preserving essential texture and structural information from different MRI modalities such as T1, T2, and FLAIR. Through comprehensive experiments and comparative analyses against several state-of-the-art models, the proposed system demonstrates superior performance in terms of accuracy, sensitivity, specificity, and Dice coefficient. Furthermore, the model's performance across different training epochs validates its learning stability and scalability. Simulation results demonstrated that, for multimodal fusion with PN-UNET, the proposed I-PCNN achieved the highest classification accuracy of 95.4%, outperforming other models like DCNNBT (94.0%) and Ensemble Deep Learning (94.8%). For individual MRI modalities, the I-PCNN also maintained high accuracy (94.2% for T1, 93.6% for T2, and 92.9% for FLAIR), indicating its robustness across varying input types. In the feature extraction phase, the proposed I-PHOG with PN-UNET yielded the best performance with 95.4% accuracy, 93.2% sensitivity, 97.1% specificity, and a Dice coefficient of 0.91, while maintaining a low feature extraction time of 1.02 seconds. This shows an optimal balance between precision and efficiency. During classification, the proposed I-PCNN with PN-UNET again led with 95.4% accuracy, 93.2% sensitivity, 97.1% specificity, and a Dice score of 0.91, surpassing other models like RanMerFormer (93.2% accuracy, 0.88 Dice) and traditional CNNs (92.5% accuracy, 0.87 Dice).</p>R. Subhan Tilak Basha*, Dr. B. P. Santosh Kumar
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2510
Binary Sentiment Analysis And Sentiment Marketing Strategy Optimization Of E-Commerce Platform User Comments Based On Deep Learning Algorithm
https://www.ubplj.org/index.php/tesd/article/view/2489
<p>Binary sentiment analysis plays a crucial role in e-commerce marketing strategy optimization by classifying user comments into positive or negative sentiments, allowing businesses to analyze customer feedback, predict user behavior, and enhance personalized marketing efforts. This paper explores the application of the bi-gram Hidden Markov Optimization (bi-gram HMMO) model in sentiment analysis within e-commerce platforms in China. Leveraging natural language processing techniques, the bi-gram HMMO model captures intricate dependencies between consecutive words to discern user sentiment from textual data such as reviews and comments. Through a systematic analysis of user interactions, the model accurately identifies and categorizes emotional states, providing valuable insights into customer satisfaction levels and areas for improvement. BERT + JOA achieved a 95.6% accuracy and a 95.4% F1-score, outperforming traditional models like Bi-gram HMM (78.5%) and CNN + JOA (91.5%). Sentiment-aware marketing strategies lead to higher customer engagement, conversion rates, and revenue growth. After implementing sentiment-optimized strategies, customer conversion increased by 73.3% (from 4.5% to 7.8%), while customer retention improved by 24.9% (from 55.2% to 68.9%). Click-through rates (CTR) doubled (+103.1%), and sentiment-based product recommendations saw a 136.6% increase in engagement, indicating that customers respond better to personalized and emotionally intelligent marketing campaigns. Additionally, monthly revenue growth surged by 127.0%, while the average order value (AOV) increased by 33.4% (from $52.3 to $69.8), demonstrating that sentiment-aware marketing strategies directly influence e-commerce profitability. The findings reveal a prevalence of positive sentiment in user expressions, particularly towards product quality, delivery speed, and customer service, with probability scores ranging from 0.75 to 0.90. Conversely, instances of negative sentiment are associated with product defects or damaged items, yielding lower probability scores of 0.60 to 0.80.</p>Jin Gao, Xiang Li*, Wenbo Ma
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2489
Design and Fabrication of MEMS Area Changed Capacitive Accelerometer Using FDM 3D Printing Technique and Vibration Test based Mechanical Characterization
https://www.ubplj.org/index.php/tesd/article/view/2529
<p>This work presents the design, fabrication, and testing of a MEMS (Micro-Electro-Mechanical Systems) area-change capacitive accelerometer. The need for rapid production of mechanical components and replacements has become increasingly important, driving the growth of fast prototyping and additive manufacturing across various industries, including sensor manufacturing. The authors used FDM 3D printing technology for the design. A key requirement for FDM 3D printing is that all material dimensions must be in millimeters. The authors developed the area-change capacitive accelerometer and performed several simulation analyses. The fabricated device produced a displacement of 2.13 mm along the X-axis, a natural frequency of 19.292 Hz, a noise floor of 0.063 ng/√Hz, and an output voltage of 1.417 V at 50 g. The cross-axis sensitivity is only 7.01%, even for a very high acceleration range (0–50 g). The simulation analyses were performed using COMSOL Multiphysics software. The authors tested the 3D-printed device for frequency and displacement across all three axes using a vibration shaker, and most of the results showed close agreement with the theoretical values.</p>P. Balakrishna*, Joseph Daniel Rathnasami, Y. V. Narayana
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2529
Application of Genetic Algorithm in Optimizing Path Selection in Tourism Route Planning
https://www.ubplj.org/index.php/tesd/article/view/2506
<p>Path selection in tourism route planning involves optimizing travel routes to maximize tourist satisfaction and minimize travel time, cost, or other constraints. This task can be complex due to factors like visitor preferences, attraction availability, and travel schedules. Tourism planners employ algorithms, such as Genetic Algorithms (GA) or probability-based approaches, to identify efficient routes by analyzing large datasets. These algorithms evaluate potential paths based on criteria like distance, attraction variety, and user satisfaction, often adapting based on real-time data and user feedback to ensure optimal results. Dynamic programming and probabilistic models can further enhance path selection by considering changing conditions and transitional probabilities between destinations, providing tourists with tailored, flexible routes that meet their preferences while adhering to practical constraints. This paper investigates the application of Weighted Ranking Ant Colony Optimization (WRACO) in tourism route planning, aiming to enhance travel experiences by efficiently navigating the complexities of tourism landscapes. WRACO integrates a weighted ranking scheme into the Ant Colony Optimization (ACO) framework, biasing ant decision-making towards more attractive paths. Through a comprehensive analysis of simulation results, WRACO demonstrates its efficacy in iteratively refining travel itineraries, minimizing travel distances while ensuring convergence to optimal or near-optimal solutions. Through simulation, WRACO achieves a significant reduction in travel distances, with the best tour length minimized to 200 units over ten iterations. Comparative analysis with other optimization algorithms reveals WRACO's superiority, showcasing a notably low best tour length and high solution quality.</p>Lihua Yuan*
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2506
Financial Market Volatility Time Series Prediction and Volatility Adjustment Algorithm
https://www.ubplj.org/index.php/tesd/article/view/2487
<p>Time series analysis of financial market volatility involves examining historical price data to understand patterns, identify trends, and predict future fluctuations. This method utilizes statistical techniques and models, such as Autoregressive Conditional Heteroskedasticity (ARCH), Generalized ARCH (GARCH), and their variants, to capture the time-dependent nature of market volatility. Analysts focus on measuring risk, detecting anomalies, and understanding market reactions to external events, such as economic policies or geopolitical crises. This paper introduces a novel approach to financial market forecasting and volatility estimation using Time Series Optimized Deep Learning Forecasting (TSODLF) in the Chinese market. Leveraging the capabilities of deep learning and optimization algorithms, TSODLF offers a comprehensive framework for capturing complex temporal patterns and adapting to changing market conditions. Through a series of experiments and analyses, we demonstrate the effectiveness of TSODLF in accurately predicting future values of financial variables and estimating market volatility. Through empirical analyses, our model achieves promising results, with mean absolute error (MAE) values ranging from 0.012 to 0.015 and root mean squared error (RMSE) values ranging from 0.018 to 0.022 across different forecasting and volatility estimation tasks. Additionally, TSODLF exhibits strong performance with adjusted R-squared values between 0.78 and 0.85, indicating its ability to explain a significant portion of the variability in the data.</p>Xiang Li, Jin Gao*, Wenbo Ma
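For reference, the GARCH(1,1) recursion cited above can be written as sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2; the sketch below implements it with illustrative parameter values, not estimates from the paper's data.

```python
# GARCH(1,1) conditional-variance recursion, the classical baseline named in the abstract.
# Parameter values are illustrative assumptions.
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)                       # initialize with the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1]
    return sigma2

r = np.random.default_rng(0).normal(0, 0.01, 500)     # toy daily return series
print(np.sqrt(garch11_variance(r))[-5:])              # last five conditional volatilities
```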
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2487
Stagnant Power Aware High Secure Digital Chaotic Pseudo Random Number Generator Using AAES
https://www.ubplj.org/index.php/tesd/article/view/2527
<p>Hardware Security plays a major role in most of the applications which include net banking, e-commerce, military, satellite, wireless communications, electronic gadgets, digital image processing, etc. Stagnant power refers to a state where the power generation or utilization within a system remains static, failing to adapt or improve in response to evolving demands, technologies, or environmental challenges. This stagnation can occur due to outdated infrastructure, lack of innovation, or insufficient policy support, leading to inefficiencies, energy losses, and suboptimal performance. In sectors such as renewable energy, industrial operations, and electrical grids, stagnant power hampers progress, limiting the ability to meet growing energy demands and sustainability goals. This paper presents a Proposed Sequence-Order Chaotic Pseudo Random Number Generator (PRNG) using AAES, which offers significant improvements in both security and efficiency over traditional PRNGs and conventional AES implementations. The proposed design achieves 100% success in the NIST SP800-22 randomness test, surpassing the 98% success rate of traditional PRNGs. It demonstrates an entropy of 0.9995, an improvement of 0.35% over conventional PRNGs, and a correlation coefficient close to 0, resulting in a 100% reduction in correlation when compared to traditional PRNGs. The AAES-based PRNG also features a 256-bit key space, doubling the security strength of conventional PRNGs that use a 128-bit key space. In terms of efficiency, the proposed PRNG achieves a 22.8% reduction in power consumption, using only 12.5 mW compared to 16.2 mW for conventional AES PRNGs. The area utilization is also reduced by 14.3%, requiring 1.8 mm² compared to 2.1 mm² in conventional AES designs. The throughput of the AAES-based PRNG is 400 Mbps, a 5.3% improvement over the traditional 380 Mbps throughput. Latency is reduced by 21.4%, achieving 22 ns compared to 28 ns in conventional AES. Security-wise, the AAES-based PRNG exhibits a high resistance to cryptographic attacks, with 99.9% improvement in differential cryptanalysis success rate and a 99.8% reduction in linear cryptanalysis bias. The key recovery time is improved by 20 orders of magnitude, with the proposed PRNG requiring approximately 10^50 years to break, compared to 10^30 years for traditional PRNGs. These results demonstrate the proposed AAES-based PRNG's superior security, efficiency, and suitability for cryptographic applications, particularly in resource-constrained environments like the Internet of Things (IoT).</p>B Satyaramamanohar A*, Dr. T. Bernatin, Dr. Tikkireddi Aditya Kumar, Ch. Sridevi, Dr. BH.V.V.S.R.K.K. Pavan, Balla Mounica
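As a minimal illustration of a chaotic pseudo-random bit generator (explicitly not the AAES-based construction proposed in the paper), the sketch below derives bits from a logistic-map orbit:

```python
# Minimal chaotic PRNG sketch: threshold a logistic-map orbit to emit bits.
# Illustrative only; not the paper's AAES-based design and not cryptographically vetted.
def logistic_bits(seed=0.6180339887, r=3.99, n_bits=64, burn_in=100):
    x = seed
    for _ in range(burn_in):                 # discard the transient
        x = r * x * (1.0 - x)
    bits = 0
    for _ in range(n_bits):
        x = r * x * (1.0 - x)
        bits = (bits << 1) | (1 if x > 0.5 else 0)   # threshold the chaotic orbit
    return bits

print(hex(logistic_bits()))
```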
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2527
Scenic Route Planning of Online Rural Tourism Platform Based on Scientific Computing Visualization Algorithm
https://www.ubplj.org/index.php/tesd/article/view/2504
<p>Scenic route planning involves designing travel paths that prioritize visually appealing landscapes, cultural landmarks, and natural beauty, rather than solely focusing on the shortest or fastest routes. This approach enhances the overall travel experience by integrating elements such as picturesque views, serene environments, and points of interest along the journey. This paper presents scenic route planning for online rural tourism platforms using the Weighted Whale Optimization Scenic Route Optimization (WWOSRO) algorithm. By segmenting and evaluating routes based on scenic attractiveness and travel time, the WWOSRO algorithm offers travelers a diverse range of options tailored to their preferences, balancing the allure of scenic landscapes with practical considerations. Through extensive experimentation, we demonstrate the algorithm's effectiveness in optimizing route selection, achieving an average increase of 15% in scenic attractiveness scores compared to traditional methods. Integration of WWOSRO into online platforms enhances user engagement and satisfaction, providing immersive and personalized experiences, with user feedback indicating a 20% increase in overall satisfaction ratings. Additionally, the optimization-driven approach promotes sustainability in tourism by reducing average travel time by 10%, thus minimizing environmental impact. This study contributes to advancing user-centric tourism practices and underscores the potential of WWOSRO as a valuable tool for enhancing rural tourism experiences.</p>Wenyue Hu*
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2504
Automated Lion Optimization Algorithm with Deep Transfer Learning Based Oral Cancer Detection and Classification Model
https://www.ubplj.org/index.php/tesd/article/view/2525
<p>Oral cancer (OC) recognition involves leveraging innovative technologies like imaging models and machine learning (ML) techniques to analyze oral cavity anomalies, aiding initial diagnosis and enhancing treatment outcomes. These new methods contribute to timely intervention and the potential for improved survival rates in individuals at risk of oral cancer. The standard diagnosis of oral cancer is the microscopic study of specimens removed, typically via incisional biopsies of the oral mucosa, from a clinically spotted suspicious lesion. The use of deep learning (DL) methods has proved effective for many kinds of cancer; however, limited research has been conducted using histopathological OSCC images. Unlike conventional ML, which requires manual feature extraction and relies on domain expertise, DL can automatically extract features, shifting from hand-designed to data-driven features. Despite the customary medical techniques employed in oral classification, automated models based on a DL framework show promising results. Therefore, this article presents an automated lion optimization algorithm with a deep transfer learning-based oral cancer detection and classification (LOADL-OCDC) methodology. The main intention of the LOADL-OCDC technique is to recognize and categorize the occurrence of oral cancer into distinct classes. The LOADL-OCDC technique follows a multistage process. Initially, the LOADL-OCDC technique performs bilateral filtering-based noise elimination and CLAHE-based contrast improvement. Next, the EfficientNet model is applied to learn complex and intrinsic feature patterns from the pre-processed images. In the presented LOADL-OCDC technique, the lion optimization algorithm (LOA) is applied for fine-tuning the hyperparameters of the EfficientNet model. For cancer detection, the LOADL-OCDC technique applies a deep recurrent neural network (DRNN). A comprehensive experimental study is conducted to investigate the detection results of the LOADL-OCDC technique. The comparative study confirmed the superiority of the LOADL-OCDC system in terms of different measures.</p>Sathishkumar R*, Govindarajan M
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2525
Target Detection In Wushu Competition Video Based On Kalman Filter Algorithm Of Multi-Target Tracking
https://www.ubplj.org/index.php/tesd/article/view/2502
<p>In modern multi-target tracking, the integration of Kalman filter data and visual space transformation (VST), enabled by digital video processing technology, has revolutionized creative workflows. Kalman filter data involves the representation of colors in standardized formats such as RGB, CMYK, or LAB, which ensures consistent color reproduction across digital and physical mediums. Visual space transformation (VST) further enhances this by mapping colors and spatial features from one representation to another, allowing seamless transitions between different design elements and contexts. This paper explores the application of Kalman Filter Data Fusion Multi-Target Tracking (CSDFGD) for Visual Space Transformation (VST) in multi-target tracking, evaluating its effectiveness through a comprehensive analysis of various metrics before and after its implementation. The results demonstrate significant improvements across key areas, including visual appeal, coherence, spatial accuracy, and computational efficiency. From smoother color transitions and harmonized color schemes to higher precision in scaling and rotation, CSDFGD proves to be a valuable tool for modern multi-target tracking design. Traditional design methods often yield designs with moderate visual appeal and coherence due to limitations inherent in individual color spaces. However, through the implementation of CSDFGD, significant improvements have been observed. Before CSDFGD, designs scored 6 for visual appeal, with color richness and vibrancy rated at 6 and 5 respectively. Coherence metrics were lower, with color transitions and schemes scoring 4 and 5. Spatial accuracy was moderate, with scaling precision at 6 and rotation accuracy at 5. Processing time was high at 10 seconds, and real-time feasibility scored 3. After CSDFGD, designs showed remarkable enhancements, with visual appeal rising to 9, and color richness and vibrancy reaching 9 and 8 respectively. Coherence metrics saw substantial improvements, with color transitions and schemes scoring 9 and 8. Spatial accuracy significantly increased, with scaling precision and rotation accuracy reaching 9 and 8. Processing time reduced to 4 seconds, and real-time feasibility improved to 8.</p>Dr. Supraja Veerabomma, Dr. M. Srinivasa Rao, Dr. Putta Brundavani*, Dr. Bepar Abdul Raheem
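For reference, the core Kalman filter predict/update recursion named in the title looks as follows for a single constant-velocity target with 2-D position measurements; the noise covariances are illustrative assumptions and the multi-target data-association step is omitted.

```python
# Constant-velocity Kalman filter predict/update for one tracked target.
# Sketch of the core recursion only, not the paper's full multi-target pipeline.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                                 # measure position only
Q = 0.01 * np.eye(4)                                                              # process noise (assumed)
R = 1.0 * np.eye(2)                                                               # measurement noise (assumed)

x = np.zeros(4)            # state [px, py, vx, vy]
P = np.eye(4)

def step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
    x = x + K @ (z - H @ x)                             # update with measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in np.array([[1.0, 0.9], [2.1, 2.0], [3.0, 3.2]]):
    x, P = step(x, P, z)
print(x[:2], x[2:])        # estimated position and velocity
```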
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2502
SE-CSA based IN-D SG disturb free Volatile static memory using Fin-FET with Steep subthreshold sleep
https://www.ubplj.org/index.php/tesd/article/view/2520
<p>FinFET, or Fin Field-Effect Transistor, is a type of transistor used in modern semiconductor devices to overcome the limitations of traditional planar transistors as scaling continues in the nanometer range. Characterized by a three-dimensional structure, the FinFET features a "fin" shape that extends vertically from the substrate, allowing for better electrostatic control over the channel and reduced leakage currents. This design enhances performance, improves switching speed, and allows for lower power consumption compared to conventional transistors. This paper presents a novel approach to static random-access memory (SRAM) design utilizing Shorted Gate Fin-FET (SG-Fin-FET) and Independent Gate Fin-FET (IG-Fin-FET) technologies at a 7-nm process node. The proposed 8T SRAM cell architecture, which excludes read and write transistors, effectively addresses the limitations of traditional SRAM configurations by decoupling the read and write operations. This decoupling enhances the Static Noise Margin (SNM), which is measured to be approximately <strong>0.15 V</strong>, and significantly improves the Power Delay Product (PDP), achieving a reduction of <strong>40%</strong> compared to conventional SRAM designs. The Input Dependent (INDEP) technique employed reduces leakage power dissipation by nearly <strong>50%</strong>, thereby enhancing overall energy efficiency. Simulation results indicate a notable improvement in write capability, with a write margin of <strong>0.2 V</strong>. The segmented array design allows for larger memory arrays without sacrificing performance, supporting up to <strong>1 MB</strong> of data storage while maintaining an area efficiency of <strong>0.35 μm²</strong> per cell. The proposed methodology demonstrates superior flexibility in optimizing read and write functionalities, resulting in improved performance metrics suitable for high-density memory applications. The results affirm the viability of using SG-Fin-FET and IG-Fin-FET technologies for next-generation SRAM cells, offering significant advancements in both power efficiency and operational performance.</p>Shyam K*, V. Vijay Kumar
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2520
Blockchain-Based Mass Spectrometry Data Processing and Feature Extraction in Drug Quality Control with Data Mining Model for Deep Learning Process
https://www.ubplj.org/index.php/tesd/article/view/2500
<p>Mass spectrometry data processing and feature extraction are vital in drug analysis, particularly for ensuring drug quality and safety. These techniques allow for the detailed identification and quantification of chemical compounds within pharmaceutical products, supporting the detection of impurities, active ingredients, and potential contaminants. Data mining algorithms enhance this process by sifting through large datasets, identifying patterns, and extracting key features that are critical for quality control. Algorithms such as clustering, classification, and anomaly detection can isolate relevant data points, aiding in precise compound characterization. By automating the extraction of critical features from complex spectra, data mining facilitates faster and more accurate quality assessment, reducing human error and enhancing compliance with regulatory standards. This paper introduces Ethereum Blockchain Clustering (EBC) as a novel approach to drug quality assessment in the pharmaceutical industry. Leveraging the capabilities of blockchain technology and mass spectrometry data analysis, EBC offers a transparent and decentralized platform for managing and analyzing drug quality attributes. Through a series of simulations and case studies, we demonstrate the effectiveness of EBC in categorizing drug samples, extracting relevant features, and assessing their potency, purity, and stability. In our simulations, we observe mean drug potency values ranging from 97.8% to 99.2%, mean drug purity values ranging from 95.1% to 97.8%, and mean drug stability ranging from 23.2 to 25.8 months across different scenarios and clusters. These results highlight the potential of EBC to enhance transparency, traceability, and trust within the pharmaceutical supply chain, ultimately contributing to improved patient safety and healthcare outcomes.</p>Dr. T. Venkata Naga Jayudu, Dr. Giribabu Sadineni, Jajjara Bhargav, P. Jayaselvi, Dr. T. Sunitha, Masthan Rao Kale*
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2500
Segmentation of Blood Vessels using PCA and CNN
https://www.ubplj.org/index.php/tesd/article/view/2518
<p>Accurate segmentation of blood vessels in retinal images is a crucial task for diagnosing various eye diseases, including diabetic retinopathy, glaucoma, and macular degeneration. The complexity of retinal images, characterized by varying illumination, noise, and low contrast, makes vessel segmentation challenging. This paper introduces a novel framework that integrates the feature extraction capabilities of Principal Component Analysis (PCA) with the segmentation power of Convolutional Neural Networks (CNNs). The proposed approach leverages PCA for dimensionality reduction and contrast enhancement, ensuring that essential features are retained while reducing computational complexity. CNNs are then employed to accurately segment blood vessels by learning spatial hierarchies and intricate vessel structures. Extensive experiments are conducted on the DRIVE dataset, which includes a diverse set of retinal fundus images with manually annotated vessel masks. The proposed PCA-CNN model demonstrates significant improvements in segmentation accuracy, precision, and recall compared to traditional segmentation techniques. The model achieves an accuracy of 98% and an IoU score of 0.85, outperforming existing methods. Furthermore, the integration of PCA and CNNs enhances computational efficiency, making the approach suitable for large-scale medical imaging applications. By addressing the limitations of traditional segmentation methods, this work contributes to the advancement of automated retinal vessel segmentation. The insights gained from this study highlight the potential of combining dimensionality reduction techniques with deep learning for enhanced medical image analysis, ultimately aiding in the early detection and diagnosis of retinal diseases.</p>Dr. N. C. Santosh Kumar*, Dr. S. Raghu, Dr. Nagunuri Rajender, Sobiya Sabahat, Kasetti Silpa, P. Srinivas
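A minimal sketch of the PCA stage described above, applied to flattened image patches (the CNN segmentation stage is omitted); the patch size and component count are illustrative assumptions, not the paper's settings.

```python
# Sketch of the PCA stage: flatten patches, keep the leading components, and
# reconstruct a low-rank version before handing features to a CNN (omitted here).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
patches = rng.random((1000, 16 * 16))          # stand-in 16x16 retinal patches, flattened

pca = PCA(n_components=32, whiten=True).fit(patches)
codes = pca.transform(patches)                 # reduced features that would feed the CNN
denoised = pca.inverse_transform(codes)        # low-rank reconstruction of the patches
print(codes.shape, pca.explained_variance_ratio_.sum())
```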
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2518
A Visual-Acoustic Modeling Framework for Robust Dysarthric Speech Recognition Using Synthetic Visual Augmentation and Transfer Learning
https://www.ubplj.org/index.php/tesd/article/view/2498
<p>Dysarthria is a motor speech disorder that affects an individual's ability to control the speech muscles, seriously interfering with their ability to communicate and perform digital interaction. Automatic Speech Recognition (ASR) systems have made tremendous improvements but remain limited for dysarthric speech, especially in severe cases where speakers are unable to articulate phonemes consistently. The problem is compounded by insufficient training data and inconsistent phoneme labelling. We present Speech Vision (SV), a visual-acoustic modelling technique for a dysarthria-targeted ASR system. SV does not depend on the audio alone but transforms the speech into visual spectrogram representations and trains deep neural networks to identify the shape of the phoneme rather than the phoneme's variability when spoken. This departs from traditional acoustic phoneme modelling and addresses the central challenges of dysarthric speech. Specifically, to address data scarcity, SV uses visual data augmentation by producing synthesized dysarthric spectrograms from Generative Adversarial Networks (GANs) and time-frequency distortions. Moreover, transfer learning is applied to adapt models pre-trained on healthy speech to dysarthric speech for greater robustness and generalization. We compare SV against the existing systems DeepSpeech, DysarthricGAN-ASR, and Transfer-ASR using the UA-Speech dataset. In 67% of the speakers, SV increased recognition accuracy by an average of 18.5%, with a significant reduction in average Word Error Rate (WER), particularly for severe dysarthria. By adopting visual learning, synthetic augmentation, and transfer learning in a single pipeline, SV offers a new solution to the problem of dysarthric ASR and potentially makes ASR more accessible for speech-impaired populations.</p>P. Hemalatha*, Dr. K. Vinay Kumar, Dr. Uppalapati Srilakshmi, Dr. Putta Brundavani
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-23 2025-06-23 167 A2 (S) 10.5750/sijme.v167iA2 (S).2498
Deep Learning based Attribute Identification for Deceit Prediction Using EEG Signal Analysis
https://www.ubplj.org/index.php/tesd/article/view/2515
<p>Lie detection is in the spotlight these days, particularly when it is applied to brain activity scanning, because it may prove revolutionary for psychology, security, and law enforcement agencies. One of the new techniques is based on detecting lies through examination of brainwave patterns derived from EEG signals using advanced methods. In particular, the approach applies Sample Entropy (SampEn) and Recurrence Quantification Analysis (RQA) to provide cues to Long Short-Term Memory (LSTM) networks. The premise is that SampEn and RQA can extract the non-linear, dynamic, and complex features of EEG data that are believed to be consistent indicators of deception. To simplify the data, Independent Component Analysis reduces the number of EEG channels from 16 to 12. To monitor the performance of the model, the researchers employed indicative metrics such as ROC curves, F1 score, precision, and recall. Trained on feature values based on SampEn and RQA, the LSTM model achieved an accuracy of 97.66%. To avoid having the model overfit the training data, training was stopped early at epoch 81. Its performance was also benchmarked against more sophisticated neural networks, such as Multi-Layer Spiking Neural Networks, and conventional signal processing techniques, such as the Fourier transform and wavelet analysis. The findings indicate that the algorithm delivers a strong balance between high accuracy and computational efficiency, and is therefore well suited for real-world lie detection applications. Detection of deception from EEG activity is an artificial intelligence breakthrough and a cognitive neuroscience milestone. Compared to other lie detection techniques, such as the polygraph, which searches for bodily reactions possibly produced under duress, EEG recordings capture an early indicator of brain activity. Sophisticated signal processing techniques such as Sample Entropy and Recurrence Quantification Analysis are employed to navigate the intricate EEG signal. By cancelling repetitive noise and detecting critical informative characteristics, such algorithms can detect subtle patterns of deception.</p>Revoori Swetha*, Dr. Damodar Reddy Edla
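For reference, a direct implementation of Sample Entropy, one of the two feature extractors named above, is sketched below on a stand-in EEG segment; the parameters m and r follow common defaults, not necessarily the paper's settings.

```python
# Direct (O(N^2)) Sample Entropy of a 1-D signal: negative log of the ratio of template
# matches of length m+1 to matches of length m, under Chebyshev distance and tolerance r.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, float)
    r = r_factor * np.std(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2          # exclude self-matches

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

eeg = np.random.default_rng(0).normal(size=500)               # stand-in EEG channel
print(round(sample_entropy(eeg), 3))
```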
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2515Factors Influencing the Cold Transportation Process of Waxy Shale Oil
https://www.ubplj.org/index.php/tesd/article/view/2495
<p>During the cold flow transportation (cold transportation) process of waxy crude oil, when the transportation temperature is below the wax precipitation point, wax deposition and pipe congealing phenomena can easily occur, affecting the safety of the transportation pipeline. By analyzing the microscopic wax crystal structure and size in the oil matrix under the cold transportation conditions of waxy shale oil, studying the rheological change rules of congealed oil during cold transportation, and combining basic physical properties with annular wax deposition simulation, a wax deposition kinetic model is developed. This model predicts the wax deposition rate and proposes safe transportation conditions and pipe cleaning requirements for cold transporting waxy shale oil. The results show that under cold transportation conditions, waxy shale oil forms needle-shaped microcrystalline wax with a length of less than 2 µm. Shear forces can disrupt the stable congealed oil structure and improve its rheological properties. Decreasing the initial temperature of cold transport helps reduce the yield stress of congealed oil. The kinetic model predicts a maximum wax deposition rate of no more than 0.23 mm/d. These findings can provide technical guidance for a flow safety design of the cold transportation process for waxy shale oil.</p>Hao WanMinfeng Tao Chengjie Xie Qiyu Huang Yan Liang Zhongqing TangJunlei Wang *
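As a rough illustration of what a wax deposition kinetic estimate involves, the sketch below evaluates the classical molecular-diffusion (Fick's-law) form of the wall flux and converts it to a thickness growth rate. Every parameter value is a placeholder assumption; this is not the paper's fitted model.

```python
# Illustrative molecular-diffusion estimate of wax deposition rate (Fick's-law form
# often used in deposition kinetics); all parameter values are assumed, not the
# paper's measured or fitted values.
D_WO = 1.0e-10      # wax diffusion coefficient in oil, m^2/s (assumed)
DC_DT = 0.15        # change of dissolved-wax mass fraction with temperature, 1/K (assumed)
DT_DR = 250.0       # radial temperature gradient at the pipe wall, K/m (assumed)
RHO_OIL = 850.0     # oil density, kg/m^3 (assumed)
RHO_WAX = 900.0     # deposit density, kg/m^3 (assumed)

# Mass flux of wax toward the cold wall, kg/(m^2 s)
flux = RHO_OIL * D_WO * DC_DT * DT_DR

# Convert to deposit-thickness growth rate in mm/day
growth_mm_per_day = flux / RHO_WAX * 1000.0 * 86400.0
print(f"estimated deposition rate: {growth_mm_per_day:.3f} mm/d")
```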
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2495An Optimal Framework for Intelligent Prediction of Prediabetes and Type-2 Diabetes Using Genomic Data
https://www.ubplj.org/index.php/tesd/article/view/2513
<p>One of the most important problems in medical research is the prediction of type 2 diabetes in patients. A number of type 2 diabetes prediction models already exist; nevertheless, due to their subpar quality, the desired outcome has not been achieved. NaN (missing) values in the gene data make type 2 diabetes prediction especially complicated, and these flaws kept both performance and prediction scores low. Consequently, the plan is to create a new method for predicting type 2 diabetes using chimp optimization combined with a functional link neural architecture (CbFLNA). Feature selection, categorization, and gene expression analysis were carried out as part of the pre-processing methodology. The first steps were to import the genomic database, preprocess the data, and extract the meaningful features. Next, individuals were classified according to the likelihood that they have type 2 diabetes. Finally, the model's performance was evaluated, and it achieved a very high accuracy score in the prediction.</p>Sreenivas Pratapagiri*Shanker ChandreDr. Balaji Maram
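A hedged sketch of the functional link idea that the abstract builds on: the input features are expanded with trigonometric basis functions before a simple classifier. The optimization stage (chimp-based in CbFLNA) is abstracted away here; the data, expansion order, and classifier choice are assumptions for illustration only.

```python
# Sketch of a functional link expansion followed by a plain classifier; the
# chimp-optimization step described in the abstract would tune the expansion order
# and model parameters, and is not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def functional_link_expand(X, order=2):
    """Trigonometric functional expansion: [x, sin(k*pi*x), cos(k*pi*x)] for k = 1..order."""
    parts = [X]
    for k in range(1, order + 1):
        parts.append(np.sin(k * np.pi * X))
        parts.append(np.cos(k * np.pi * X))
    return np.hstack(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))                 # toy stand-in for gene-expression features
    y = (X[:, 0] * X[:, 1] > 0).astype(int)       # synthetic labels
    Xe = functional_link_expand(X, order=2)
    clf = LogisticRegression(max_iter=500).fit(Xe, y)
    print("expanded dims:", Xe.shape[1], "train acc:", round(clf.score(Xe, y), 3))
```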
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2513Application Of Deep Learning Technology In Global Electronic Information Management And Evaluation Under The Perspective Of International Trade Law
https://www.ubplj.org/index.php/tesd/article/view/2493
<p>Global Electronic Information Management (GEIM) plays a crucial role in streamlining data processing, enhancing security, and ensuring regulatory compliance in international trade. With the increasing volume of cross-border transactions, efficient information management systems are essential to handle trade documentation, legal compliance, fraud detection, and secure communication between trade entities. This paper presents HCEIM-DL (Hidden Chain Ethereum Information Management with Deep Learning) as an advanced framework for electronic information management and international trade law enforcement. By integrating deep learning, blockchain, and AI-driven compliance systems, HCEIM-DL significantly enhances trade security, fraud detection, compliance accuracy, and processing efficiency. The model achieves a trade compliance accuracy of 96.8%, fraud detection rate of 95.4%, and legal contract verification accuracy of 99.3%, ensuring robust regulatory adherence. With a transaction processing speed of 7,200 TPS, HCEIM-DL outperforms traditional systems by 85.7%, enabling faster and more efficient trade operations. The model also improves data transparency to 99.2%, reducing the risk of legal disputes, and cuts compliance costs by 78.6%, making global trade more affordable. Additionally, customs clearance efficiency increases to 95.8%, reducing trade delays, while dispute resolution time decreases by 66.7%, from 45 days to just 10 days. Energy consumption per transaction is optimized with a 37.8% reduction, ensuring a sustainable and scalable system. These results highlight HCEIM-DL as a transformative approach to trade law enforcement, enhancing security, efficiency, and compliance while reducing costs and risks in international trade.</p>Jin GaoXiang Li* Wenbo Ma
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2493A Capsule Network-Based Hybrid Deep Learning Model For Efficient Prediction Of Crispr-Cas9 Off-Target Effects
https://www.ubplj.org/index.php/tesd/article/view/2533
<p>CRISPR-Cas9 genome editing has transformed biomedical research and the development of therapies, yet the challenge of unintended off-target effects remains a significant obstacle to its clinical use. In this study, we present a new deep learning model that combines Capsule Networks with Transformer blocks, bidirectional LSTM layers, and CNNs. The model is further strengthened by incorporating k-mer encoded sequence features and biological rule checks to predict CRISPR-Cas9 off-target activity with greater accuracy. It processes guide and off-target DNA sequences through a hybrid pipeline, which includes convolution, temporal modelling, attention-based representation learning, and spatial hierarchy encoding using capsule layers. At the same time, the model extracts and analyses numerical features, such as mismatch counts, GC content, and PAM motif patterns. To improve reliability, we introduce a biological constraint layer that filters predictions based on well-established domain knowledge. The final predictions result from integrating these various feature representations. Our results show that this biologically-informed architecture significantly enhances both sensitivity and specificity in off-target prediction, indicating its potential to improve the safety and design of CRISPR experiments.</p>Hamsika Chakilam*Kishore Kumar TTekyam Krishna Kumar Naidu
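The numerical sequence features named in the abstract (k-mer counts, mismatch count, GC content, PAM motif check) are straightforward to compute; the sketch below shows minimal versions on invented guide/target sequences. It is illustrative only and does not reproduce the paper's model.

```python
# Minimal sketch of the hand-crafted sequence features named in the abstract
# (k-mer counts, mismatch count, GC content, NGG PAM check); example sequences invented.
from itertools import product

BASES = "ACGT"

def kmer_counts(seq, k=2):
    """Count occurrences of every k-mer over the ACGT alphabet."""
    counts = dict.fromkeys("".join(p) for p in product(BASES, repeat=k))
    counts = {kmer: 0 for kmer in counts}
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in counts:
            counts[sub] += 1
    return counts

def mismatch_count(guide, target):
    return sum(a != b for a, b in zip(guide, target))

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_ngg_pam(target):
    """Check the canonical SpCas9 NGG PAM in the last two bases of the 23-mer."""
    return target[-2:] == "GG"

guide  = "GACGCATAAAGATGAGACGCTGG"
target = "GACGCATAAAGATGAGACGTTGG"
print(mismatch_count(guide, target), round(gc_content(guide), 2), has_ngg_pam(target))
print(list(kmer_counts(guide).items())[:4])
```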
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2533Between Companionship and Alienation in Literature and Cinema: Reimagining Personal Law through Mahasweta Devi’s Story “The Divorce (Talaq)” and B.R. Chopra’s “Nikaah”
https://www.ubplj.org/index.php/tesd/article/view/2511
<p>It is evident that literature documents the incidents of society, shaping them into a variety of genres, and the evolution of mankind is woven through these genres in the pages of literature. Indeed, through these genres mankind witnesses the society as it once was. Literature educates society about its rational and irrational practices and eventually brings about social transformation. Mahasweta Devi’s story “<em>The Divorce (Talaq)</em>” and Achala Nagar’s play “<em>Nikaah</em>” were written around a then-prevalent practice of a community, the Triple Talaq, which is now a concern of present Indian law. The Indian Constitution epitomizes the shared vision of the people of India and has framed laws with sectarian concerns in mind. More specifically, the Fundamental Rights of Indian citizens are a much sought-after topic of discussion on distinguished platforms. Under the protection of rights in marriage, a Bill was passed in 2019 to protect the rights of Muslim women (citizens) in marriage. The Muslim Women Bill, in other words the ‘Triple Talaq’ Bill, confirms that the practice is a criminal offence. Is this Bill in reality accepted by all, or do victims choose a middle path in response to Talaq? The present research throws light on these questions, as well as on the need for the Women’s Bill and the consequences faced by Muslim women before and after the Bill, with reference to Mahasweta Devi’s story “The Divorce (Talaq)” and B.R. Chopra’s Hindi film “Nikaah”, which is based on Achala Nagar’s play.</p>Dr. B. Anitha*R. Subhan Tilak Basha
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2511Student Evaluation Based on Association Rule Reinforcement Learning for Teaching Quality Assurance
https://www.ubplj.org/index.php/tesd/article/view/2490
<p>Optimizing university teaching-quality assurance and student evaluation through association rule mining leverages data-driven insights to enhance instructional effectiveness. By analyzing student interactions, performance, and engagement patterns, association rule mining identifies key relationships among course components, such as content type, activity sequence, and learning outcomes. This approach enables educators to tailor course structures to improve student engagement and comprehension, ensuring that blended learning elements are effectively aligned to maximize educational impact and address diverse learning needs. This paper explores the Query Swarm Blended Teaching Association Rule (QSBTAR) algorithm and its application in optimizing student evaluation within blended learning environments in China. The proposed model combines a data mining approach with the examination of student performance. QSBTAR extracts valuable insights from educational data to establish associations between teaching components and student outcomes. Through the analysis of the association rules generated by QSBTAR, the paper elucidates the intricate relationships between various instructional elements and key performance metrics such as quiz scores, participation rates, and exam performance. These insights are then integrated into course design, facilitating improvements in student engagement, comprehension, and satisfaction. While the algorithm shows promising results, its limitations, including data quality constraints and interpretability challenges, are also considered.</p>Chaonan Xu Lin Xu*Wenbo Ma
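To make the association-rule step concrete, a toy support/confidence computation over student-activity "transactions" is sketched below. The activity labels and thresholds are invented; this stands in for the rule-generation part of QSBTAR, not the swarm-optimization component.

```python
# Toy support/confidence computation over student-activity transactions; a minimal
# stand-in for the association-rule step, with made-up activity labels.
from itertools import combinations

transactions = [
    {"watched_video", "did_quiz", "high_score"},
    {"watched_video", "did_quiz", "high_score"},
    {"did_quiz", "low_score"},
    {"watched_video", "forum_post", "high_score"},
    {"forum_post", "did_quiz", "high_score"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

items = set().union(*transactions)
for a, b in combinations(sorted(items), 2):
    ante, cons = {a}, {b}
    if support(ante | cons) >= 0.4:           # minimum-support threshold (assumed)
        print(f"{a} -> {b}: support={support(ante | cons):.2f}, "
              f"confidence={confidence(ante, cons):.2f}")
```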
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2490Network Intrusion Detection System using Stacked Ensemble Model with SVM SMOTE oversampling and Recursive Feature Elimination
https://www.ubplj.org/index.php/tesd/article/view/2531
<p>Intrusion Detection Systems (IDS) are essential for safeguarding networks against malicious activities, but traditional IDS models struggle with challenges such as class imbalance, high false alarm rates, and poor generalization. While machine learning (ML)-based IDS offer improvements, single classifier models suffer from bias, variance, and limited robustness. To address these limitations, this study proposes a Non-evolutionary Feature Selection-based Network Intrusion Detection System using Stacked Ensemble Learning (NFSNIDS). The proposed workflow begins with data preprocessing, where SVM SMOTE oversampling balances class distribution, Local Outlier Factor (LOF) outlier detection removes anomalies, Recursive Feature Elimination (RFE) selects relevant features, and Robust Scaler ensures effective data normalization. The processed data is then fed into a Stacked Ensemble Learning model comprising Extreme Gradient Boosting (XGB) and Extra Trees (ET) as base classifiers. Their outputs are used to create a new training set for a meta-classifier, which is trained using Logistic Regression to enhance predictive performance. The model is validated using 10-fold cross-validation, with Accuracy and F1-score as key performance metrics. Comparative evaluations against single classifiers, existing ensemble models, and benchmark IDS solutions confirm that NFSNIDS consistently outperforms all alternatives, making it a highly effective and robust approach for network intrusion detection.</p>Anil Kumar Dasari*Dr.Saroj Kumar BiswasProf.Biswajit PurkayasthaMd Sajjad Hossain
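A hedged sketch of the described preprocessing-plus-stacking workflow using scikit-learn and imbalanced-learn on synthetic data. GradientBoostingClassifier stands in for XGBoost here, and all hyperparameters are assumptions, so this is an illustration of the pipeline shape rather than the paper's configuration.

```python
# Sketch of an NFSNIDS-style pipeline: SVM SMOTE -> LOF outlier removal -> RFE ->
# RobustScaler -> stacked ensemble with a logistic-regression meta-classifier.
import numpy as np
from imblearn.over_sampling import SVMSMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import RobustScaler

# Imbalanced synthetic stand-in for a flow-level intrusion dataset
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1], random_state=0)

# 1) Balance classes with SVM SMOTE
X_bal, y_bal = SVMSMOTE(random_state=0).fit_resample(X, y)

# 2) Drop samples flagged as outliers by Local Outlier Factor
mask = LocalOutlierFactor(n_neighbors=20).fit_predict(X_bal) == 1
X_bal, y_bal = X_bal[mask], y_bal[mask]

# 3) Select features with RFE, then scale robustly
X_sel = RFE(ExtraTreesClassifier(n_estimators=50, random_state=0),
            n_features_to_select=15).fit_transform(X_bal, y_bal)
X_scaled = RobustScaler().fit_transform(X_sel)

# 4) Stacked ensemble (GradientBoosting stands in for XGB) with LR meta-classifier
stack = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                ("et", ExtraTreesClassifier(n_estimators=100, random_state=0))],
    final_estimator=LogisticRegression(max_iter=500))
print("10-fold accuracy:", cross_val_score(stack, X_scaled, y_bal, cv=10).mean().round(3))
```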
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2531Deep Neural Strategies for Uncovering Climbing Elements and Mapping Route Patterns
https://www.ubplj.org/index.php/tesd/article/view/2509
<p>This work presents a new deep learning architecture specifically designed to extract and structure climbing features in indoor settings. Built on two neural models, one based on pair-wise similarity and one on triplet comparison, the approach learns to discriminate between climbing holds belonging to the same route and those that do not. In contrast with more conventional color clustering methods, our method is superior in accuracy and robustness, albeit at the cost of complex manual annotation and higher computational demands. The findings emphasize the promise of deep neural networks to transform automated route mapping within climbing environments.</p>Srinivasa Sai Abhijit Challapalli*
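For reference, the triplet-comparison objective mentioned above is typically a triplet margin loss; a minimal NumPy version is sketched below with random embeddings standing in for climbing-hold feature vectors. The margin value is an assumption.

```python
# Minimal triplet-margin loss in NumPy, the style of comparison objective referred to
# in the abstract; embeddings are random stand-ins for climbing-hold features.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss encouraging d(anchor, positive) + margin < d(anchor, negative)."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

rng = np.random.default_rng(0)
anchor   = rng.normal(size=(8, 64))                    # holds on a route
positive = anchor + 0.05 * rng.normal(size=(8, 64))    # same-route holds (close)
negative = rng.normal(size=(8, 64))                    # other-route holds (far)
print(round(float(triplet_loss(anchor, positive, negative)), 4))
```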
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2509Application Of Mobile Learning Based On Artificial Intelligence In Student Open Teaching Strategy
https://www.ubplj.org/index.php/tesd/article/view/2488
<p>Mobile learning has become a crucial component of modern teaching strategies, offering flexibility, accessibility, and personalized learning experiences. By integrating artificial intelligence (AI) with mobile learning, educators can enhance student engagement, improve content retention, and facilitate adaptive learning paths. AI-powered mobile learning platforms provide personalized recommendations, real-time feedback, and gamification elements to make learning more interactive and effective. Additionally, features such as collaborative learning, AI-moderated discussions, and secure authentication mechanisms ensure a seamless and secure learning environment. This paper analyzes the impact of OTP-HML-AI (One-Time Password Hashing Mobile Learning with AI) on teaching strategies, student engagement, learning retention, and security in mobile-based education. The research compares student performance before and after the implementation of AI-powered teaching strategies, demonstrating significant improvements across key metrics. Student engagement increased from 60% to 85% (+25%), learning retention improved from 55% to 80% (+25%), and average assessment scores rose from 65% to 83% (+18%). AI-driven content recommendations were highly effective, with usage increasing from 40% to 88% (+48%), while real-time feedback improved learning efficiency by 40%. Additionally, collaborative learning participation grew from 52% to 70% (+18%), highlighting AI’s role in fostering teamwork and peer interactions. The implementation of secure OTP authentication ensured 100% success in login attempts, reducing failed login attempts from 12% to 3% (-9%) and unauthorized access attempts from 8% to 1% (-7%), while student satisfaction with security increased from 60% to 95% (+35%). Among various AI-powered teaching methodologies, AI-personalized learning achieved a 90% engagement rate, AI-based content recommendations 88%, and real-time AI feedback 85%, proving the effectiveness of adaptive and data-driven learning approaches.</p>Xiangjun ZhangPanpan Wang Wei Guo*
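The OTP authentication component referred to above is commonly realised as an HMAC-based one-time password; the sketch below is the standard HOTP construction (RFC 4226), shown as generic context rather than the paper's exact scheme, with a made-up shared secret.

```python
# Standard HMAC-based one-time password (HOTP, RFC 4226) illustrating the OTP
# authentication idea; generic, not the paper's specific implementation.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a truncated HMAC-SHA1 one-time password from a shared secret and counter."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"shared-secret-between-app-and-server"   # placeholder secret
for counter in range(3):
    print(counter, hotp(secret, counter))
```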
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2488Content Based Secure Recommender System for Big Data Analytics
https://www.ubplj.org/index.php/tesd/article/view/2528
<p>In today's digital era, recommender systems play a pivotal role in enhancing user experiences across various domains, including healthcare. Big data analytics in healthcare refers to the use of advanced data analysis techniques to extract meaningful insights, patterns, and knowledge from large and complex healthcare datasets. Recommender systems play a valuable role in this context by helping to make sense of the vast amount of data available and by improving the quality of healthcare services, for example by providing healthcare professionals with evidence-based recommendations for diagnosis and treatment. Security is a critical issue in healthcare big data analytics due to the sensitive nature of the data: healthcare records often include personally identifiable information (PII), and unauthorized access can lead to identity theft and privacy breaches, so strict access controls and encryption measures are essential. Hence, this paper proposes the Content Service Ensemble Recommender System (CSERS) model, which combines content-based collaborative filtering with the Blowfish algorithm for its security features. The proposed CSERS model uses a big data analytics pipeline with a tokened boost stamping feature extraction model and polarity computation. The secure recommender model for healthcare monitoring of patients is evaluated by computing patients' satisfaction levels, and the Blowfish cryptographic process is implemented to achieve the desired security of the healthcare data. First, the paper examines CSERS's scalability across different dataset sizes and transaction volumes; the findings reveal its ability to efficiently process data loads of varying magnitudes, making it adaptable to the demands of real-world healthcare environments. Second, the paper presents CSERS's security assessment capabilities, highlighting its proactive approach to estimating the required security level based on file size, which enhances data integrity and user trust, critical considerations in healthcare content recommendation systems. Furthermore, the research investigates CSERS's sentiment analysis capabilities, showcasing strong correlations between sentiment aspects such as professionalism, communication, and overall care; these correlations align with the system's target sentiment, indicating its effectiveness in tailoring recommendations to users' preferences and sentiments. Lastly, the study evaluates CSERS's performance in attack classification, where it consistently achieves high accuracy, precision, recall, and ROC-AUC percentages; for instance, CSERS achieves an accuracy of 97.8% on 50 MB datasets, underscoring its reliability in identifying security threats accurately. In conclusion, this paper positions CSERS as a versatile and powerful system for content recommendation in healthcare applications, whose ability to enhance user experiences while ensuring security and trustworthiness makes it a valuable asset in today's data-driven healthcare landscape.</p>Dr. Veera Ankalu VuyyuruDr. Uppalapati SrilakshmiDr.Sreedhar BhukyaDr. Putta Brundavani*Dr. K. Vinay Kumar
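A minimal sketch of the content-based filtering side of such a recommender, using TF-IDF and cosine similarity on invented item descriptions. The Blowfish encryption layer and the ensemble/big-data components the abstract describes are out of scope here, and all names are illustrative.

```python
# Content-based recommendation sketch: TF-IDF item profiles scored against a user
# profile by cosine similarity; item texts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {
    "cardiology_checkup":  "heart rate blood pressure ecg cardiology follow up",
    "diabetes_management": "glucose insulin diet monitoring diabetes care plan",
    "physio_rehab":        "mobility exercise physiotherapy recovery strength",
}
user_profile = "patient with high glucose needs insulin and diet monitoring"

vect = TfidfVectorizer()
matrix = vect.fit_transform(list(items.values()) + [user_profile])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(items, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```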
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2528Research On Evaluation Technology of College Students' Physical Quality Based on Bee Colony Optimization Algorithm
https://www.ubplj.org/index.php/tesd/article/view/2505
<p>The physical quality of college students is a critical aspect of their overall health and well-being, reflecting their physical strength, endurance, flexibility, speed, and coordination. It serves as an important foundation for academic performance, mental health, and future lifestyle habits. This paper proposes an optimized deep learning framework for evaluating the physical quality of college students by integrating Ant Colony Optimization (ACO) for feature selection and Bee Colony Optimization (BCO) for hyperparameter tuning. A dataset comprising physical indicators such as BMI, 50m sprint, endurance run, sit & reach, and strength test was analyzed for 10 students, with labels classified into four categories: Excellent, Good, Average, and Poor. ACO effectively selected the five most relevant features while eliminating less impactful ones such as pulse rate and height, resulting in a more focused input set. The BCO algorithm was used to optimize key hyperparameters of the deep learning model, including the learning rate (optimized from 0.01 to 0.001), batch size (64 to 32), and dropout rate (0.5 to 0.3), while increasing the number of hidden layers (2 to 3) and neurons per layer (64 to 128). These optimizations led to significant improvements in classification performance, with accuracy increasing from 84.5% to 92.3%, precision from 83.2% to 91.0%, recall from 85.0% to 93.4%, and F1-score from 84.1% to 92.2%. Additionally, training time was reduced from 180 seconds to 125 seconds. Results across 50 training epochs showed consistent metric improvements, confirming the model’s convergence and stability.</p>Zhuo Bi Yinglong Zhang*
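A simplified, hedged sketch of the hyperparameter-tuning stage: a small bee-colony-style search over learning rate and hidden-layer width, scored by cross-validation of an MLP on synthetic data. The swarm size, iteration count, and search ranges are assumptions, and the employed/onlooker/scout phases of full BCO are collapsed into one loop.

```python
# Toy bee-colony-style hyperparameter search (learning rate, hidden units) evaluated
# by 3-fold cross-validation; synthetic data stand in for the physical-quality dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rng = np.random.default_rng(0)

def fitness(lr, hidden):
    clf = MLPClassifier(hidden_layer_sizes=(int(hidden),), learning_rate_init=lr,
                        max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Initialise a small swarm of candidate food sources: (learning rate, hidden units)
food = [(10 ** rng.uniform(-3, -1), rng.integers(16, 129)) for _ in range(5)]
scores = [fitness(lr, h) for lr, h in food]

for _ in range(10):                       # employed/onlooker phases collapsed into one loop
    i = int(np.argmax(scores))            # exploit the best source
    lr, h = food[i]
    cand = (lr * 10 ** rng.uniform(-0.3, 0.3),
            int(np.clip(h + rng.integers(-16, 17), 8, 256)))
    s = fitness(*cand)
    j = int(rng.integers(len(food)))      # replace a random source if the candidate is better
    if s > scores[j]:
        food[j], scores[j] = cand, s

best = int(np.argmax(scores))
print("best lr=%.4f, hidden=%d, cv acc=%.3f" % (food[best][0], food[best][1], scores[best]))
```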
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2505Study on the Impact of Internal and External Governance Characteristics on Corporate R&D Investment-Based on China GEM Listed Companies
https://www.ubplj.org/index.php/tesd/article/view/2486
<p>Researching how different aspects of corporate governance affect R&D spending is both theoretically and practically relevant. Businesses in the high-technology sector have been in the spotlight recently due to their status as icons of innovation, and modern Chinese high-tech businesses face intense competition in order to stay afloat and grow. One way they are adapting to this environment is by focusing their efforts on improving their own competitiveness through in-house innovation. Yet domestic and international academics have given little attention to the connection between internal and external corporate governance and R&D investment activities, and have reached contradictory findings. The influence of internal and external governance on businesses is complicated, with both positive and negative consequences, and the impact path of corporate governance on enterprises cannot be reduced to a single variable for measurement. For this reason, it is important to investigate the following questions in the context of China's current economic transition: (i) Does corporate governance affect corporate R&D investment? (ii) What is the inner mechanism behind the impact of internal and external corporate governance on R&D investment activities? (iii) What is the relationship between corporate internal governance and R&D investment activities? This paper proposes relevant hypotheses after summarising and analysing the research conclusions of other scholars and cleans missing data from the annual reports of China GEM manufacturing listed companies from 2011 to 2020 in order to measure the specific impact of corporate governance and industry competition on enterprise R&D investment and the moderating role of industry competition in the relationship between corporate governance and R&D investment. The author conducts an empirical analysis and draws the following conclusions. First, from an external perspective, the intensity of industry competition has a major impact on the R&D investment decisions of firms, and vice versa. Second, from within the corporation, state-holding enterprises have greater leeway to invest in research and development than non-holding enterprises do, and combining the chairman and general manager roles can boost R&D spending. Third, the link between state holding, position combination, and R&D investment is not moderated by industry competition.</p>Zheyuan Liu*
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2486NLP Hybrid Deep Learning Model for E-Learning System Prediction Classifier System
https://www.ubplj.org/index.php/tesd/article/view/2526
<p>Natural Language Processing (NLP) with deep learning transforms how machines understand and generate human language by leveraging powerful neural networks, such as Recurrent Neural Networks (RNNs) and Transformers. Deep learning models in NLP can process vast amounts of unstructured text data, enabling highly accurate tasks like sentiment analysis, machine translation, text summarization, and question-answering. Course construction refers to designing, organizing, and implementing educational content to achieve specific learning objectives. A Learning Management System (LMS) integrated with deep learning in e-learning revolutionizes personalized education by analyzing student data to provide customized learning paths and experiences. Deep learning models in the LMS can assess a student's learning style, pace, and areas of difficulty by processing large volumes of user interactions, such as quizzes, assignments, and engagement metrics. The proposed n-gram-LMS-MC-DL model integrates n-gram language modeling, Markov Chain, and Deep Learning (DL) techniques to enhance the prediction accuracy in Learning Management Systems (LMS). This hybrid approach aims to predict students' next learning states based on their interactions within the LMS. The system achieves significant improvements in prediction accuracy across various learning stages. For instance, the n-gram-LMS-MC-DL model outperforms both the standalone Markov Chain and Deep Learning models, reaching an average accuracy of 92.82%, compared to 85.52% for Deep Learning and 77.78% for the Markov Chain. In individual stages, the model predicts with 92.1% accuracy for transitioning from "Lesson Completed" to "Quiz Started" and 95.2% accuracy for progressing from "New Lesson" to "Discussion." In addition to enhanced accuracy, the system maintains high precision (average 0.87), recall (average 0.85), and F1-score (average 0.855) across various learning activities, with manageable time complexities ranging from 120 ms to 155 ms.</p>Dr. Sudha VemarajuDr.K. SarvaniDr. Satya Vani BethapudiDr. Venkateswarlu Chandu*Dr.Ch. SahyajaDr.K. Kiran Kumar VarmaAnkam Dhilli Babu
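To illustrate the Markov-chain component of the hybrid predictor, the sketch below estimates transition probabilities between LMS learning states from toy session logs and returns the likely next state. The state names echo those in the abstract but the sessions are invented, and the n-gram and deep-learning components are not shown.

```python
# Toy Markov-chain next-state prediction for LMS learning states; session data invented.
from collections import Counter, defaultdict

sessions = [
    ["New Lesson", "Lesson Completed", "Quiz Started", "Quiz Completed", "Discussion"],
    ["New Lesson", "Lesson Completed", "Quiz Started", "Quiz Completed"],
    ["New Lesson", "Discussion", "Lesson Completed", "Quiz Started"],
]

transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def next_state_distribution(state):
    counts = transitions[state]
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

print(next_state_distribution("Lesson Completed"))   # e.g. {'Quiz Started': 1.0}
print(next_state_distribution("New Lesson"))         # mix of 'Lesson Completed' and 'Discussion'
```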
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2526Power Signal Processing and Feature Extraction Algorithms Based On Time-Frequency Analysis
https://www.ubplj.org/index.php/tesd/article/view/2503
<p>Feature extraction in power signal processing plays a crucial role in accurately identifying and classifying various power quality disturbances. Power signals are often non-stationary and complex, containing both transient and steady-state components, which necessitates the extraction of meaningful features that capture their underlying characteristics. In this process, features are derived from multiple domains—time, frequency, and time-frequency—to ensure a holistic representation of the signal behavior. Time-domain features such as mean, standard deviation, skewness, kurtosis, root mean square (RMS), and entropy help in capturing statistical variations and signal energy fluctuations. Frequency-domain features like Total Harmonic Distortion (THD), spectral centroid, and spectral entropy provide insights into harmonic content and frequency distribution, which are critical for detecting distortions and resonances in the power system. This paper proposes an efficient and intelligent framework for power signal classification using a Stacked Whale Optimization-based Machine Learning (SWO-ML) model. The approach combines robust feature extraction from time, frequency, and time-frequency domains with advanced optimization and classification techniques to enhance power quality assessment. A total of 13 features were extracted, including statistical, spectral, and wavelet-based parameters, from different signal conditions such as normal, fault, transient, harmonic distortion, and load switching. The SWO algorithm was employed to select the most informative 18 features out of the initial pool, significantly reducing dimensionality while maintaining high discriminative performance. The proposed SVM + SWO model achieved a classification accuracy of 96.8%, precision of 96.2%, recall of 95.9%, and an F1-score of 96.0%, outperforming baseline models such as SVM without optimization (90.2%), SVM + PSO (93.1%), and SVM + GA (92.4%). In addition, the training time was reduced to 1.85 seconds, showcasing the computational efficiency of the system. Performance evaluation over 100 training epochs showed stable learning with final validation accuracy reaching 98.2% and a minimal loss of 0.05. The results confirm that the SWO-ML framework is highly effective for intelligent, real-time classification of power signals, offering promising applications in power system monitoring, smart grid stability, and fault diagnosis.</p>Dr. Supraja Veerabomma Dr. K.Chinna Kullayappa Dr. Kethepalli MallikarjunaDr. Putta Brundavani* Dr. P.Vijaya KumarDr. Bepar Abdul Raheem
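For orientation, minimal versions of a few of the listed features (RMS, skewness, kurtosis, spectral entropy, THD) are computed below on a synthetic 50 Hz signal with added harmonics. The signal and harmonic amplitudes are assumptions; this is not the paper's 13-feature pipeline or the SWO selection stage.

```python
# A few of the time- and frequency-domain features named in the abstract, computed on
# a synthetic 50 Hz power signal with 3rd and 5th harmonics (illustrative only).
import numpy as np
from scipy.stats import kurtosis, skew

fs, f0 = 5000, 50
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * f0 * t) + 0.08 * np.sin(2 * np.pi * 3 * f0 * t) \
         + 0.05 * np.sin(2 * np.pi * 5 * f0 * t)

rms = np.sqrt(np.mean(signal ** 2))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
p = spectrum ** 2 / np.sum(spectrum ** 2)
spectral_entropy = -np.sum(p * np.log2(p + 1e-12))

fund = spectrum[np.argmin(np.abs(freqs - f0))]
harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 8)]
thd = np.sqrt(np.sum(np.square(harmonics))) / fund

print(f"RMS={rms:.3f}  skew={skew(signal):.3f}  kurtosis={kurtosis(signal):.3f}")
print(f"spectral entropy={spectral_entropy:.3f}  THD={thd:.3%}")
```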
Copyright (c) 2025 Technologies and Their Effects on Real-Time Social Development
https://creativecommons.org/licenses/by-nc/4.0
2025-06-232025-06-23167A2 (S)10.5750/sijme.v167iA2 (S).2503