Biography
Dr. Subhi Rafeeq Mohammed Zeebaree is a Professor of Computer Engineering and the Director of the Culture and Student Activities Center at Duhok Polytechnic University (DPU). He received his B.Sc., M.Sc., and Ph.D. degrees from the University of Technology, Baghdad, Iraq, in 1990, 1995, and 2006, respectively. He has been teaching and supervising postgraduate students (Ph.D. and M.Sc.) since 2007. More than thirty of his Ph.D. and M.Sc. students have completed their studies and obtained their degrees under his supervision, and several more are currently under his supervision. He holds joint Ph.D. supervision arrangements with UTM (Malaysia), Firat University (Turkey), and EMU (Cyprus). He served as Chairman of the Scientific and Research Advisory Committee of DPU for five years. He is a member of the IEEE Iraq Section and has participated in more than fifteen international IEEE conferences. He chaired three IEEE-sponsored international conferences (ICOASE2018, ICOASE2019, and ICOASE2020), organized jointly by Duhok Polytechnic University (DPU) and the University of Zakho (UoZ). He has more than 150 papers and books published in journals and conference proceedings indexed by Clarivate, Scopus, DOAJ, and DOI. His official email: [email protected]
Education
Ph.D.
Computer Engineering
University of Technology - Baghdad
M.Sc.
Computer Engineering
AL-Rasheed College of Engineering and Science
B.Sc.
Electrical and Electronics Engineering
AL-Rasheed College of Engineering and Science
Title
Professor
Assistant Professor
Lecturer
Assistant Lecturer
Skills
Enterprise Systems Modeling
E-business and Web Design, Semantic Web, Distributed Systems, Cloud Computing
Internet Technology
Advanced Operating Systems, Distributed Operating Systems, Parallel Processing
TCP/IP, LAN, FTP, DNS, DHCP, IIS, OPNET, and Cisco, etc.
Software Engineering, Systems Software
Computer Architecture, Digital Logic Circuits Design, Microprocessors Techniques, Electrical and Electronics Circuits
Data Structures and OOP, Description Languages (Z and Z++), Programming Languages (Assembly, Basic, VB, Pascal, Delphi, C++, VC++, VC#, and Java)
Journal Publications
Comprehensive survey of IoT based arduino applications in healthcare monitoring
International Journal of Health Sciences (Issue: s3) (Volume: 6)
The Internet of Things (IoT) enables a range of applications in the area of information technology, one of which is connected and smart healthcare. The goal of an IoT healthcare system is to successfully monitor patients' health state in real time, to avert critical patient scenarios, and to enhance patient quality of life via smart IoT settings; additionally, healthcare expenses are reduced and patient outcomes are improved. Edge devices (glucose monitors, ventilators, pacemakers, etc.) are common in IoT systems. We reviewed the essential methodologies and cutting-edge technology for remote patient monitoring in this paper, taking into consideration data privacy and security concerns. The primary emphasis of this study is on the many solutions suggested in healthcare for IoT in real-time remote patient monitoring, blockchain technology in healthcare, and data security.
Android Mobile Applications Vulnerabilities and Prevention Methods: A Review
IEEE Xplore
The popularity of mobile applications is rapidly increasing in the age of smartphones and tablets. Communication, social media, news, sending emails, buying, paying, viewing movies and streams, and playing games are just a few of the many uses for them. Android is currently the most popular mobile operating system in the world. The Android platform dominates the mobile operating system market, and the number of Android mobile applications grows day by day. At the same time, the number of attacks is also increasing. Attackers take advantage of vulnerable mobile applications to execute malicious code, which can be harmful and can access sensitive private data. Security and privacy of data are critical and must be prioritized in mobile application development. To cope with these security threats, mobile application developers must understand the various types of vulnerabilities and prevention methods. The Open Web Application Security Project (OWASP) lists the top 10 mobile application security risks and vulnerabilities. Therefore, this paper investigates mobile application vulnerabilities and their solutions.
Trajectory tracking of differential drive mobile robots using fractional-order proportional-integral-derivative controller design tuned by an enhanced fruit fly optimization
Measurement and Control
This work proposes a new kind of trajectory tracking controller for the differential drive mobile robot (DDMR), namely, the nonlinear neural network fractional-order proportional-integral-derivative (NNFOPID) controller. The proposed controller's coefficients comprise proportional, integral, and derivative gains as well as derivative and integral powers. The adjustment of these coefficients makes the design of the proposed NNFOPID controller more challenging than that of a conventional proportional-integral-derivative controller. To handle this issue, an Enhanced Fruit Fly Swarm Optimization algorithm is developed in this work to tune the NNFOPID's parameters. The enhancement over the standard fruit fly optimization technique lies in increasing the uncertainty in the values of the initialized coefficients to cover a broader search space; the search range is then varied throughout the updating stage, beginning with a large radius that declines gradually over the course of the search. The ability of the proposed NNFOPID controller to track three types of continuous trajectories (circle, line, and lemniscate) while minimizing the mean square error and the control energy has been validated. Demonstrations were run in the MATLAB environment and revealed the practicality of the designed NNFOPID motion controller; its performance was compared with that of a nonlinear neural network proportional-integral-derivative controller on the tracking of one of the aforementioned trajectories of the DDMR.
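The shrinking-search-radius idea described in the abstract can be illustrated with a short sketch. This is a minimal, self-contained Python illustration, not the paper's implementation: the objective function, population size, decay rate, and all other parameter values are hypothetical stand-ins for the controller-tuning cost.

```python
import numpy as np

def shrinking_radius_search(objective, x0, r0=1.0, r_min=1e-3,
                            decay=0.95, iters=200, pop=20, seed=0):
    """Fruit-fly-style random search whose search radius starts
    large and shrinks each iteration (the abstract's enhancement)."""
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f, r = objective(best_x), r0
    for _ in range(iters):
        # Sample a swarm of candidates around the current best point.
        candidates = best_x + rng.uniform(-r, r, size=(pop, best_x.size))
        values = [objective(c) for c in candidates]
        i = int(np.argmin(values))
        if values[i] < best_f:
            best_x, best_f = candidates[i], values[i]
        r = max(r * decay, r_min)  # gradually narrow the search range
    return best_x, best_f

# Toy usage: tune two hypothetical controller gains on a quadratic cost.
gains, cost = shrinking_radius_search(lambda g: (g[0] - 2)**2 + (g[1] + 1)**2,
                                      x0=[0.0, 0.0])
```

Starting wide and decaying the radius trades early exploration for late fine-tuning, which is the behavior the abstract attributes to the enhanced algorithm.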
Rider weed deep residual network-based incremental model for text classification using multidimensional features and MapReduce
PeerJ Computer Science
Increasing demands for information and the rapid growth of big data have dramatically increased the amount of textual data. In order to obtain useful text information, the classification of texts is considered an imperative task. Accordingly, this article will describe the development of a hybrid optimization algorithm for classifying text. Here, pre-processing was done using the stemming process and stop word removal. Additionally, we performed the extraction of imperative features and the selection of optimal features using the Tanimoto similarity, which estimates the similarity between features and selects the relevant features with higher feature selection accuracy. Following that, a deep residual network trained by the Adam algorithm was utilized for dynamic text classification. Dynamic learning was performed using the proposed Rider invasive weed optimization (RIWO)-based deep residual network along with fuzzy theory. The proposed RIWO algorithm combines invasive weed optimization (IWO) and the Rider optimization algorithm (ROA). These processes are carried out under the MapReduce framework. Our analysis revealed that the proposed RIWO-based deep residual network outperformed other techniques with the highest true positive rate (TPR) of 85%, true negative rate (TNR) of 94%, and accuracy of 88.7%.
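A short sketch may help make the Tanimoto-based feature selection step concrete. The following Python example assumes, purely for illustration, binary feature vectors and a hypothetical relevance profile; the formula dot(a, b) / (|a|^2 + |b|^2 - dot(a, b)) is the standard Tanimoto similarity, though the paper's exact scoring setup may differ.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity: dot(a, b) / (|a|^2 + |b|^2 - dot(a, b))."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    dot = a @ b
    return dot / (a @ a + b @ b - dot)

# Rank candidate features against a (hypothetical) relevance profile.
features = np.array([[1, 0, 1, 1],
                     [0, 1, 0, 0],
                     [1, 1, 1, 0]], dtype=float)
profile = np.array([1, 0, 1, 0], dtype=float)
scores = [tanimoto(f, profile) for f in features]
selected = np.argsort(scores)[::-1][:2]  # keep the top-2 most similar features
```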
Design a Clustering Document based Semantic Similarity System using TFIDF and K-Mean
IEEE Xplore
The continuing success of the Internet has led to an enormous rise in the volume of electronic text records. Strategies for grouping these records into coherent groups are increasingly important. Traditional text clustering methods focus on statistical characteristics, with a syntactic rather than semantic concept used to perform clustering. A new approach for grouping documents based on textual similarities is presented in this paper. The method is accomplished by identifying, tokenizing, and stop-word-filtering text synopses from the Wikipedia and IMDB datasets using the NLTK dictionary. Then, a vector space is created using TFIDF, and the K-means algorithm is used to carry out clustering. The results were shown as an interactive website.
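As a rough illustration of the pipeline described above (not the authors' code), the following sketch builds a TF-IDF vector space and clusters it with K-means using scikit-learn; the four toy documents and the choice of k = 2 are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["a film about space travel", "galaxy exploration movie",
        "cooking pasta at home", "easy homemade pizza recipe"]

# Build the TF-IDF vector space (English stop words removed).
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

# Cluster the documents with K-means.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster id assigned to each document
```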
Comprehensive Study of Moving from Grid and Cloud Computing Through Fog and Edge Computing towards Dew Computing
IEEE Xplore
Dew Computing (DC) is a comparatively modern field with a wide range of applications. By examining how technological advances such as fog, edge, and dew computing and distributed intelligence force us to reconsider traditional Cloud Computing (CC) in serving the Internet of Things, a new dew computing theory is presented in this article. The revised definition is as follows: DC is a software and hardware organization paradigm in the cloud computing environment in which on-premises servers provide autonomy and collaborate with cloud networks. Dew computing aims to enhance the capabilities of on-premises and cloud-based applications, and these categories can result in the development of new applications. Worldwide, there has been rapid growth in Information and Communication Technology (ICT), starting with Grid Computing (GC), then CC, Fog Computing (FC), and the latest Edge Computing (EC) technology. DC technologies, infrastructure, and applications are described, and we go through the newest developments in fog networking, QoE, cloud at the edge, platforms, security, and privacy. The dew-cloud architecture is an alternative to the current client-server architecture, where two servers are located at opposite ends. In the absence of an Internet connection, a dew server helps users browse and track their details. Data are primarily stored as a local copy on the dew server and synchronized with the cloud master copy once the Internet connection is available. The local dew pages, a local online version of the current website, can be browsed, read, written, or added to by the users. Mapping between different local dew sites is made possible using the dew domain name scheme and dew domain redirection.
Ultra-Dense Request Impact on Cluster-Based Web Server Performance
IEEE Xplore
The objective of this study is to evaluate cluster-based web server performance under ultra-dense HTTP traffic. This paper provides a performance analysis of the existing load balancing algorithms (Round Robin, Least Connection, and IP-Hash/Source) in cluster-based web servers. The performance testing process is operated with Apache JMeter 5.1 and a distribution technique to realize an ultra-dense load (100,000-500,000 HTTP requests) in a real network. Generally, the results indicated that the proposed Nginx-based cluster is more responsive, more stable, and consumes fewer resources in terms of the Response Time, Throughput, Standard Deviation, and CPU Usage measurements, while in terms of Error Rate, the Apache-based cluster is more efficient. Moreover, with the Nginx-based cluster, the Round Robin algorithm provided slightly better performance. In contrast, the IP-Hash algorithm outperformed the other two algorithms for the Apache-based cluster in terms of all the utilized metrics.
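For readers unfamiliar with the three algorithms compared above, here is a minimal Python sketch of the selection logic each one uses; the server addresses, the connection-count bookkeeping, and the hash function are illustrative assumptions, not the clusters' actual configuration.

```python
import itertools
import hashlib

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical backends
active = {s: 0 for s in servers}                 # open connections per server
rr = itertools.cycle(servers)

def round_robin():
    # Hand each new request to the next server in a fixed rotation.
    return next(rr)

def least_connection():
    # Prefer the server currently holding the fewest open connections.
    return min(active, key=active.get)

def ip_hash(client_ip):
    # Hash the client address so the same client always hits the same server.
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

# A real balancer would increment/decrement `active` as connections open/close.
```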
Cloud-based Parallel Computing System Via Single-Client Multi-Hash Single-Server Multi-Thread
IEEE Xplore
Parallel distributed processing is a relatively new method. A distributed cloud interconnects data and applications delivered by cloud computing technologies from several geographical locations. When something is distributed in IT, it is shared among multiple systems located across diverse areas. The volume of information, along with the time it takes to analyze it and to monitor the projected results efficiently and effectively, has increased dramatically. This paper presents a system to assist users in performing composite tasks interactively with the least processing time. Distributed-Parallel-Processing (DPP) and Cloud Computing (CC), two of the most powerful technologies, can process and answer the user's problem quickly. The suggested system was developed using several resources (source generators, under-test load, computing machines, and processing units) and web servers through the cloud. Hash codes are generated on the client side and sent to the web server, which delivers these codes to the designated cracking servers. It has been verified that while employing a light load (a single hash code) with multi-servers and multiprocessors, the suggested system gives improved efficiency in terms of Kernel-burst, User-burst, and Total-execution timings. It has also been demonstrated that employing large loads (many under-test codes) with many computing machines using multiprocessors improves the system's performance from the parallel-processing point of view. The proposed method took these situations into account because code breaking is governed by two critical criteria: minimal breaking time and cost-effective usage of computing resources.
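The client/server hash-cracking flow can be sketched on a single machine with Python's multiprocessing facilities. This is an illustrative toy under stated assumptions: MD5, a three-letter lowercase search space, and the target hash are all invented, whereas the paper distributes the work across cloud servers rather than local worker processes.

```python
import hashlib
import string
from concurrent.futures import ProcessPoolExecutor
from itertools import product

TARGET = hashlib.md5(b"abc").hexdigest()  # hypothetical hash under test

def crack_chunk(prefix):
    """Each worker searches the candidate space starting with `prefix`."""
    for tail in product(string.ascii_lowercase, repeat=2):
        candidate = prefix + "".join(tail)
        if hashlib.md5(candidate.encode()).hexdigest() == TARGET:
            return candidate
    return None

if __name__ == "__main__":
    # Split the space by first letter and fan the chunks out to workers,
    # mimicking the web server handing hash codes to cracking servers.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(crack_chunk, string.ascii_lowercase):
            if result:
                print("cracked:", result)
                break
```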
A Review on Automation Artificial Neural Networks based on Evolutionary Algorithms
IEEE Xplore
The biological human brain was used to inspire the idea of Artificial Neural Networks (ANNs). The notion was then converted into a mathematical formulation and then into machine learning, which is utilized to address various issues throughout the world. Moreover, ANNs have achieved advances in solving numerous intractable problems in several fields in recent times. However, their success depends on the hyper-parameters selected, and manually fine-tuning them is a time-consuming task. Therefore, automating the design or topology of artificial neural networks has become a hot issue in both academic and industrial studies. Among the numerous optimization approaches, evolutionary algorithms (EAs) are commonly used to optimize the architecture and parameters of ANNs. In this paper, we review several successful, well-designed strategies for using EAs to develop artificial neural network architectures that have been published in the last four years. In addition, we conducted a thorough study and analysis of each publication, and details such as the methods used, datasets, computer resources, training duration, and performance are summarized for each study. Overall, the automated neural network techniques performed admirably; however, the long training period and large computer resource requirements remain issues for these sorts of ANN techniques.
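To make the EA-based tuning idea concrete, here is a minimal (mu + lambda)-style evolution loop in Python; the two-gene encoding (hidden units, learning-rate exponent) and the analytic fitness function standing in for validation accuracy are hypothetical simplifications of what the surveyed methods actually evolve.

```python
import random

def fitness(genome):
    """Hypothetical stand-in for the validation accuracy of an ANN
    described by (hidden_units, learning_rate_exponent); peaks at (64, -3)."""
    units, lr_exp = genome
    return -(units - 64)**2 / 1000 - (lr_exp + 3)**2

def mutate(genome):
    # Small random perturbation of both genes.
    units, lr_exp = genome
    return (max(1, units + random.randint(-8, 8)),
            lr_exp + random.uniform(-0.5, 0.5))

# Evolve a population of hyperparameter genomes: keep the best five
# parents each generation and fill the rest with mutated offspring.
population = [(random.randint(1, 128), random.uniform(-5, -1))
              for _ in range(10)]
for _ in range(50):
    parents = sorted(population, key=fitness, reverse=True)[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]
best = max(population, key=fitness)
```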
Design and Analysis of Intelligent Energy Management System based on Multi-Agent and Distributed IoT: DPU Case Study
IEEE Xplore
Population growth and the creation of new equipment are accompanied by a constant increase in energy use each day and have created significant consumer issues in energy management. Smart meters (SM) are simply instruments for measuring energy usage and are a significant resource of the evolving technological energy management system. By providing precise billing data and information on usage at the user end and by enabling two-way communication, the SM is the critical component of an intelligent power grid. The Internet of Things (IoT) is a critical partner in the power business, leading to intelligent resource management that ensures successful data collection and use. This paper proposes designing and analyzing intelligent energy management systems based on Multi-Agent (MA) and Distributed IoT (DIoT). An efficient approach is proposed to monitor and control power consumption levels in the proposed case study of Duhok Polytechnic University (DPU). DPU consists of a Presidency, six colleges, and eight institutes. These fifteen campuses are distributed over a wide geographical area with long distances between campuses (i.e., more than 100 km). Each campus is represented by a Node, and Wi-Fi provides the connection inside each node. These nodes are connected via the Internet to the Main Control Unit (MCU), represented by a Raspberry Pi connected to the cloud. Depending on the data received from the Nodes, the MCU makes the correct decision for each node using intelligent algorithms and the user's requirements. Then, control commands are initiated, and the node's appliances can be controlled automatically (or even manually) from the MCU.
State of Art Survey for Deep Learning Effects on Semantic Web Performance
IEEE Xplore
One of the most significant recent advances in computer science is the coevolution of deep learning and the Semantic Web. This subject includes research from various perspectives, including using organized information inside the neural network training method or enriching these networks with ontological reasoning mechanisms. By bridging deep learning and the Semantic Web, it is possible to enhance the efficiency of neural networks and open up exciting possibilities in science. This paper presents a comprehensive study of the most closely related previous research combining the role of deep learning with the performance of the Semantic Web, tying together the Semantic Web and deep learning science with their applications. The paper also explains the adoption of an intelligent system in Semantic Deep Learning (SemDeep). Among the significant results obtained from the previous works addressed in this paper, it can be noted that they focused on real-time detection of phishing websites by HTMLPhish; also, the DnCNN, led by ResNet, achieved the best results among Res-Unit, UNet, and Deeper SRCNN, recording 88.5% SSIM, 32.01 PSNR, and 3.90% NRMSE.
5G Mobile Communication System Performance Improvement with Caching: A Review
IEEE Xplore
Mobile core networks are facing exponential growth in traffic and computing demand as smart devices and mobile applications become more popular. Caching is one of the most promising approaches to these challenges and problems. Caching reduces the backhaul load in wireless networks by caching frequently used information at the destination node. Furthermore, proactive caching is an important technique to minimize the delay of storing planned content needs, relieving backhaul traffic and alleviating the delay caused by handovers. The paper investigates the caching types and compares the improvement from caching techniques with other methods used to improve 5G performance. The problems and solutions of caching in 5G networks are explored in this research. Caching research showed that the improvement with caching depends on the load, the cache size, and the number of requesting users who can get the required results through a proactive caching scheme. A significant decrease in traffic and total network latency can be achieved with caching.
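As a concrete (and deliberately simplified) picture of caching frequently used information at the destination node, the sketch below implements a least-recently-used cache in Python; LRU is just one common eviction policy, and the capacity and content names are invented for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Toy destination-node cache: keeps the most recently used content
    items and evicts the least recently used item on overflow."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None              # cache miss: fetch over the backhaul
        self.items.move_to_end(key)  # mark as recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the LRU entry

cache = LRUCache(capacity=2)
cache.put("video-1", b"...")
cache.put("video-2", b"...")
cache.get("video-1")          # hit: served locally, no backhaul traffic
cache.put("video-3", b"...")  # evicts video-2, the least recently used
```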
Impact of Distributed-Memory Parallel Processing Approach on Performance Enhancing of Multicomputer-Multicore Systems
QALAAI ZANIST SCIENTIFIC JOURNAL (Issue: 4) (Volume: 6)
Distributed memory is a term used in computer science to describe a multiprocessor computer system in which each processor has its own private memory. Computational jobs can only work with local data, so if remote data are needed, the job must communicate with one or more remote processors. Parallel and distributed computing are frequently used together: distributed parallel computing employs many computing devices to process tasks in parallel, whereas parallel computing on a single computer uses multiple processors to execute tasks in parallel. Distributed systems are designed separately from the core network, and there are different kinds of distributed systems, such as peer-to-peer (P2P) networks, clusters, grids, and distributed storage systems. Multicore processors can be classified into two types: homogeneous and heterogeneous. This paper reviews the impact of the distributed-memory parallel processing approach on performance enhancement of multicomputer-multicore systems. A number of methods used in distributed-memory systems are also introduced, and we discuss which method is the best for enhancing multicore performance in distributed systems. The best methods were those that used the GNU/Linux 4.8.0-36 operating system, an Intel Xeon 2.5 processor, and the Python programming language.
Clustering Document based Semantic Similarity System using TFIDF and K-Mean
IEEE Xplore
The steady success of the Internet has led to an enormous rise in the volume of electronic text records, and techniques for organizing these materials into meaningful bundles are increasingly needed. The standard clustering approach for documents focused on statistical characteristics, with clustering based on a syntactic rather than semantic notion. This paper provides a new way to group documents based on textual similarities. Text synopses from the Wikipedia and IMDB datasets are found, identified, and stop-word-filtered using the NLTK dictionary. The next step is to build a vector space with TFIDF and cluster it using the K-means algorithm. The results were obtained based on three proposed scenarios: 1) no preprocessing, 2) preprocessing without derivation, and 3) preprocessing with derivation. The results showed that good similarity ratios were obtained for the internal evaluation when using the txt-sentoken dataset for all K values, with the best ratio obtained at K = 20. In addition, as an external evaluation, purity measures were obtained: the V-measure of txt-sentoken and the accuracy scale of nltk-Reuters gave the best results in the three scenarios for K = 20. As a subjective evaluation, the maximum time was consumed with the first scenario (no preprocessing), and the minimum time was recorded with the second scenario (excluding derivation).
Hybrid Client/Server Peer to Peer Multitier Video Streaming
IEEE Xplore
Real-Time Video Distribution (RTVD) is now the focus of many applications, including video conferencing, surveillance, video broadcasting, etc. This paper introduces a multi-IP-camera-based method for distributing video signals to several levels using a hybrid client/server with peer-to-peer architecture. There are four primary functions in the proposed structure. First, all linked camera transmissions are received by the central server, and the video signals are shown to all attached clients and servers in level two. Second, the clients/servers in level two start viewing the received video signals and rebroadcast them to the next level. Third, the clients/servers in level three receive the video signals from the upper level and rebroadcast them to the fourth level. The fourth level consists of many clients that view the video signals received from level three. The planned architecture and mechanism of the proposed system give the admin the capability to concentrate on the frames obtained from the IP cameras to/from the central server. Furthermore, this mechanism can register and store the stream of frames and encode these frames during transmission. The proposed system was implemented in the VC# programming language and relied on various architectures and algorithms.
Web-Based Land Registration Management System: Iraq/Duhok Case Study
JOURNAL OF APPLIED SCIENCE AND TECHNOLOGY (Issue: 2) (Volume: 4)
In this era, technology plays a central role in many areas of human life, yet classical hardcopy-based approaches are still being used for land registration. Internet-based methods provide excellent facilities for overcoming the drawbacks of the handwritten style and for communication among different government sectors. Nowadays, Information and Communication Technology (ICT) is used to build professional electronic systems as big steps towards an electronic government (E-government) system. One of the most critical sections of E-government is E-Land-Registration (ELR). The Duhok Land Directorate, together with its sub-directorates, works with a considerable amount of data. These directorates suffer from the classical hardcopy-based approaches, so building an ELR system will reduce time consumption and paper waste. The improvement of the land registration system will also allow integration with the E-government system, and it will enable communication between the land registration staff on one side and the administration and financial directorates on the other. In this work, an efficient ELR system for Duhok land registration is proposed. The services of the database management system cover an Employee Registration Module, Estates Registration Module, Operation Type Module, Estate Owners Module, Estate Status Module, View Information Module, and Employee Login Module. HTML, CSS, PHP, MySQL, JavaScript, jQuery, Ajax, and Bootstrap were used for the design and implementation stages of the proposed ELR.
Predicting Football Outcomes by Using Poisson Model: Applied to Spanish Primera División
JOURNAL OF APPLIED SCIENCE AND TECHNOLOGY (Issue: 4) (Volume: 2)
During the past decades, sport in general has become one of the most powerful and popular forms of competition in the world, with everyone waiting to see who the winner, and ultimately the champion, of each tournament will be. Among these sports, football is more popular than all the others. Predicting the results of football matches, as well as the champion of various competitions, has been seriously studied in recent years and has become an interesting field for many researchers. In this work, the Poisson model is presented to predict the winner, draw, and loser of football matches. The method is applied to the Spanish Primera División (First Division) in 2016-2017; the data were downloaded from the football-data.co.uk website and used to find the prediction accuracy.
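The core of the Poisson approach can be shown in a few lines: if each team's goal count is modeled as an independent Poisson variable, summing the joint probabilities over a grid of scorelines yields win/draw/loss probabilities. The sketch below is a generic illustration of that idea; the expected-goal rates are hypothetical, not values estimated in the paper.

```python
from math import exp, factorial

def pois(k, lam):
    """P(X = k) for a Poisson variable with mean lam."""
    return lam**k * exp(-lam) / factorial(k)

def outcome_probabilities(home_rate, away_rate, max_goals=10):
    """P(home win), P(draw), P(away win) assuming independent
    Poisson-distributed goal counts for the two teams."""
    home = draw = away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = pois(h, home_rate) * pois(a, away_rate)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away

# Hypothetical expected-goal rates, e.g. estimated from historical averages.
print(outcome_probabilities(home_rate=1.6, away_rate=1.1))
```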
Performance Monitoring for Processes and Threads Execution-Controlling
IEEE Xplore
Strong parallelism can minimize computation time while increasing the cost of synchronization, so it is vital to keep track of how processes and threads are working. It is understood that thread-based systems improve the productivity of complex operations: threading lightens the load on the main thread, thus enhancing system performance. This paper focuses on the development of a system with two main stages, monitoring and controlling a program, that is able to run on a number of multicore system architectures, including those with 2, 4, and 8 CPUs. The algorithms associated with this work are built to provide the ability to report dependent computer system information, to check the status of all existing processes with their relevant information, and to run all possible process/thread cases that compose the user program, which might include one of these cases: Single-Process/Single-Thread, Single-Process/Multi-Thread, Multi-Process/Single-Thread, Multi-Process/Multi-Thread, and Multi-Process/Single-Multi-Thread. The monitoring phase provides complete information on the User Program (UP) with all its processes and threads, such as Name, ID, Elapsed Time, Total CPU Time, CPU Usage, User Time, Kernel Time, Priority, RAM size, allocated core, read bytes, and read count. The controlling phase controls the processes and threads by suspending/resuming/killing them, modifying their priority, and forcing them to a particular core.
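The monitoring/controlling split described above maps naturally onto process-inspection APIs. The following sketch uses Python's psutil library purely as a stand-in (the abstract does not name an implementation tool), and the PID 1234 is a hypothetical placeholder for the user program.

```python
import psutil

# Monitoring: list processes with a few of the metrics the paper tracks
# (ID, name, user time, kernel time, RAM size).
for p in psutil.process_iter(["pid", "name", "cpu_times", "memory_info"]):
    cpu, mem = p.info["cpu_times"], p.info["memory_info"]
    if cpu is None or mem is None:   # some processes deny access
        continue
    print(p.info["pid"], p.info["name"], cpu.user, cpu.system, mem.rss)

# Controlling: suspend/resume a process, re-prioritize it, pin it to a core.
target = psutil.Process(1234)  # 1234: hypothetical PID of the user program
target.suspend()               # pause all of its threads
target.resume()                # let it continue
target.nice(10)                # lower its priority (Unix nice value)
target.cpu_affinity([0])       # force it onto CPU core 0
```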
An Investigation on Neural Spike Sorting Algorithms
IEEE Xplore
Spike sorting is a technique used to detect signals generated by the neurons of the brain and to classify which spike belongs to which neuron. It is one of the most important techniques used in electrophysiological data processing. Spike Sorting Algorithms (SSAs) are created to differentiate the activity of one or more neurons from background electrical noise using waveforms recorded from one or more electrodes in the brain. This sorting plays an essential role in extracting information from extracellular recordings in the neuroscience research community. A spike sorting algorithm involves several steps (detection, feature extraction, and clustering), and one of the most important concerns is the accuracy of the classification of neuron spikes. This article gives a brief overview of the spike sorting algorithm; the contribution of this paper is a comprehensive overview of the previous works on the spike sorting steps (detection, feature extraction, and clustering) and of the new techniques used to solve the problem of overlapping. Moreover, recent works used real-time or online spike sorting instead of offline spike sorting, and previous researchers used machine learning algorithms for automatic classification in spike sorting.
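The detection step mentioned above is commonly implemented as amplitude thresholding against a robust noise estimate. The sketch below is a generic illustration of that step, not code from the surveyed works; the threshold factor, refractory period, and the synthetic trace are assumptions.

```python
import numpy as np

def detect_spikes(signal, fs, k=4.0, refractory_ms=1.0):
    """Amplitude-threshold spike detection: flag samples whose absolute
    value exceeds k times a robust noise estimate, enforcing a refractory
    gap so one spike is not counted multiple times."""
    noise = np.median(np.abs(signal)) / 0.6745   # robust sigma estimate
    threshold = k * noise
    crossings = np.flatnonzero(np.abs(signal) > threshold)
    gap = int(refractory_ms * fs / 1000)
    spikes, last = [], -np.inf
    for idx in crossings:
        if idx - last > gap:
            spikes.append(idx)
            last = idx
    return np.array(spikes)

# Toy trace: Gaussian noise plus two injected spikes at known positions.
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 30000)
trace[[5000, 12000]] += 20
print(detect_spikes(trace, fs=30000))  # expected: indices near 5000, 12000
```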
Massive MIMO-OFDM Performance Enhancement on 5G
IEEE Xplore
Multi-Input Multi-Output (MIMO) and Orthogonal Frequency Division Multiplexing (OFDM) based communication strategies have gained popularity in industry and research due to the rising demand for wireless cellular communication. Massive MIMO-OFDM-based cellular communication has recently become popular in a variety of real-time applications. However, as the need for high-quality communication grew, so did the number of users, posing various problems. In this article, we present a comprehensive review of Massive MIMO-OFDM-based communication techniques and their development. We mainly focus on the essential parameters of Massive MIMO-OFDM that play an essential role in 5G communication, such as PAPR, precoding, channel estimation, and error-correcting codes. The paper shows results on the energy efficiency of a wireless MIMO link operating at millimeter-wave frequencies (mmWave) in a typical 5G scenario, showing the impact of the above essential parameters on 5G performance. This comprehensive review and comparison will help researchers in 5G development and applications to adopt the proper techniques to achieve their demands.
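Of the parameters listed above, PAPR is the easiest to show numerically: it is the ratio of an OFDM symbol's peak instantaneous power to its average power. The sketch below computes it for one randomly generated QPSK symbol; the subcarrier count and modulation are illustrative choices, not the paper's configuration.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a (complex) OFDM symbol, in dB."""
    power = np.abs(x)**2
    return 10 * np.log10(power.max() / power.mean())

# One OFDM symbol: random QPSK subcarriers mapped to time domain via IFFT.
n_subcarriers = 256
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], n_subcarriers)
        + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
symbol = np.fft.ifft(qpsk)
print(f"PAPR = {papr_db(symbol):.2f} dB")
```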
Distributed and Parallel Computing System Using Single-Client Multi-Hash Multi-Server Multi-Thread
IEEE Xplore
Parallel Distributed Processing comprises relatively new approaches. A distributed cloud uses cloud computing technology from different geographic locations to interconnect the data and applications being served. Distributed, in the sense of information technology (IT), means that something is exchanged among various systems that may be in different places. The amount of data, and the exhausting time needed to process it and monitor the predicted outcomes effectively and in as little time as possible, has increased significantly. This paper proposes a system designed to help users interactively solve composite work with the lowest processing time. The two most popular technologies, Distributed Parallel Processing (DPP) and Cloud Computing (CC), are relied upon to solve the user's problem in this method. The proposed system has been built using different numbers of clients, hash codes, servers, and server logical processors, together with a cloud system (web server). The client side produces the hash codes and sends them to the cloud side, and the web server distributes them among the chosen servers in charge of executing the cracking operation. It has been verified that when using a light load (a single hash code) with multi-servers and multi-processors, the proposed system consumes the lowest time (i.e., Kernel, User, and Total time) and offers better efficiency. It is also shown that when using a heavy load (multiple hash codes) with multi-servers and multi-processors, the proposed system provides better performance from the point of view of the parallel processing technique. Because hash-code cracking is affected by two critical parameters (minimum cracking time and economical usage of machine resources), these instances were taken into account by the proposed system.
Embedded System for Eye Blink Detection Using Machine Learning Technique
IEEE Xplore
Nowadays, eye tracking and blink detection are increasingly popular among researchers and have the potential to become a more important component of future perceptual user interfaces. Real-time eye tracking has been a fundamental and challenging machine learning problem. The main purpose of this paper is to propose a new method to design an embedded eye blink detection system that can be used for various applications at the lowest cost. This study presents an efficient technique to determine whether the eyes are closed or open. We offer a real-time blink detection method using machine learning and computer vision libraries. The proposed method consists of four phases: (1) capturing frames with a Raspberry Pi camera attached to the Raspberry Pi 3 platform, (2) utilizing the Haar cascade algorithm to identify faces in the captured frames, (3) finding facial landmarks with a facial landmark detector algorithm, and (4) detecting the eye regions and calculating the eye aspect ratio. The proposed method obtained high accuracy in indicating eye closing or opening. In this study, the aspect ratio method was used to implement a robust and low-cost embedded eye blink detection system on the Raspberry Pi platform. This method is resource-efficient, fast, and easy to use for eye blink detection.
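The eye aspect ratio (EAR) used in phase (4) has a compact closed form: EAR = (|p2-p6| + |p3-p5|) / (2|p1-p4|) over the six landmarks of one eye, and it drops sharply when the eye closes. The sketch below assumes the standard six-landmark ordering of dlib's 68-point model and a hypothetical blink threshold of 0.2; the paper's exact threshold is not stated in the abstract.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (dlib 68-point ordering):
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Low EAR => eye closed."""
    p1, p2, p3, p4, p5, p6 = [np.asarray(p, dtype=float) for p in eye]
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.2  # hypothetical blink threshold; tuned empirically
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
print(eye_aspect_ratio(open_eye) < EAR_THRESHOLD)  # False: eye is open
```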
Machine Learning-based Diabetic Retinopathy Early Detection and Classification Systems- A Survey
IEEE Xplore
Diabetes Mellitus is a chronic disease that is spreading quickly worldwide. It results from an increased blood glucose level and causes complications in the heart, kidneys, and eyes. Diabetic Retinopathy (DR) is an eye disease that refers to the bursting of blood vessels in the retina as diabetes worsens. It is considered a main cause of blindness because it appears without showing any symptoms in the early stages. Early detection and classification of DR cases is a crucial step toward providing the necessary medical treatment. Recently, machine learning has played an efficient role in medical applications and computer-aided diagnosis due to the accelerated development of its algorithms. In this paper, we study the performance of various machine-learning-based DR detection and classification systems. These systems are trained and tested using massive amounts of retina fundus and thermal images from various publicly available datasets, and they have proved successful in tracking down warning signs and identifying the DR severity level. The reviewed systems' results indicate that the ResNet50 deep convolutional neural network was the most effective algorithm in terms of performance metrics; ResNet50 contains a set of feature-extraction kernels that can analyze retina images to extract a wealth of information. We conclude that machine learning algorithms can support the physician in adopting appropriate diagnoses and treating DR cases.
Study for Food Recognition System Using Deep Learning
Journal of Physics: Conference Series (Volume: 1963)
Accurate dietary assessment has been found by the literature to be very significant in the evaluation of weight loss treatments. Most current methods of dietary evaluation, however, depend on recollection. The development of a modern computer-based food recognition system for reliable food evaluation is now possible thanks to widespread mobile devices as well as rich Cloud services. We address the problem of food detection and identification in photos of different kinds of foods. Given the variety of food products with low inter-class and high intra-class variations, and the limited information in a single picture, the problem is complicated. We consider the overall application of multiple fusion-trained classifiers to achieve increased identification and recognition capabilities on characteristics obtained from various deep models. This paper studied various food recognition techniques using different approaches and compared their effectiveness based on several variables. Our study results demonstrate that deep learning outperforms other strategies, such as manual feature extractors and standard ML algorithms, and serves as a practical tool for food hygiene and safety inspections.
Real-life Dynamic Facial Expression Recognition: A Review
Journal of Physics: Conference Series (Volume: 1963)
In emotion studies, critiques of the use of static facial expressions have pointed to their poor ecological validity. In the present work, we conducted a review of studies that specifically contrasted the recognition of emotions using dynamic facial expressions. Brain imaging experiments and behavioural studies with associated physiological research are also included. Facial motion appears to be connected to our emotional processes. The findings of laboratory brain injury experiments also reinforce the concept of a neurological dissociation between static and dynamic expression mechanisms. According to the findings of electromyography studies of dynamic expressions of affective signals, those expressions evoke more intense facial mimicry and physiological responses. The studies strongly affirm the importance of dynamic facial gestures.
A survey on Security and Privacy Challenges in Smarthome based IoT
International Journal of Contemporary Architecture "The New ARCH" (Issue: 8) (Volume: 2)
The innovative and disruptive technologies of Internet of Things (IoT)-based smart home applications are mainly restricted and dispersed. To offer insightful analyses of technological settings and assist researchers, we must first grasp the available options and gaps in this field of study. Thus, this study reviews the literature to organize the research landscape into a cohesive taxonomy. We performed a targeted search for all articles relating to the security of IoT-enabled smart home systems; these articles compile the literature on IoT-enabled smart home applications. The final dataset generated by the classification method consists of 111 articles classified into nine categories: architecture development for IoT security, SH-IoT security analysis, smart security with IoT, IoT network security, surveys and reviews, machine learning and IoT security, cryptography, utilizing blockchain to improve IoT security, and authentication and authorization in IoT systems. We then define the fundamental features of this developing area in terms of the following: the reasons for using IoT in smart home applications, the major obstacles impeding adoption, and suggestions in the literature for enhancing the acceptability and usage of smart home apps.
A State of the Art Survey of Machine Learning Algorithms for IoT Security
Asian Journal of Research in Computer Science (Issue: 4) (Volume: 9)
The Internet of Things (IoT) is a paradigm shift that enables billions of devices to connect to the Internet. The IoT's diverse application domains, including smart cities, smart homes, and e-health, have created new challenges, chief among them security threats. To accommodate the current networking model, traditional security measures such as firewalls and Intrusion Detection Systems (IDS) must be modified. Additionally, the Internet of Things and Cloud Computing complement one another, frequently used interchangeably when discussing technical services and collaborating to provide a more comprehensive IoT service. In this review, we focus on recent Machine Learning (ML) and Deep Learning (DL) algorithms proposed in IoT security, which can be used to address various security issues. This paper systematically reviews the architecture of IoT applications, the security aspect of IoT, service models of cloud computing, and cloud deployment models. Finally, we discuss the latest ML and DL strategies for solving various security issues in IoT networks.
A Survey on the Role of Artificial Intelligence, Machine Learning and Deep Learning for Cybersecurity Attack Detection
IEEE Xplore
With the growth of internet services, cybersecurity has become one of the major research problems of the modern digital era. Cybersecurity involves techniques to protect and control systems, hardware, software, networks, and electronic data from unauthorized access. It is necessary to build cybersecurity systems that detect different types of attacks. Implementing various intelligent algorithms in cybersecurity makes it possible to detect and analyze attack actions occurring in computer networks. Cybersecurity uses artificial intelligence, machine learning, and deep learning algorithms capable of extracting optimal feature representations from big data sets, and these have been applied to various cybersecurity cases, such as attack detection, prediction, and analysis. This work aims to analyze cybersecurity attack datasets using intelligent approaches. It also provides a detailed comparison of algorithm performance and field implementations to describe the benefits of network protection optimization technologies.
Design & Analyses of a Novel Real Time Kurdish Sign Language for Kurdish Text and Sound Translation System
IEEE Xplore
The code most used for communication by deaf and mute people is sign language, which involves several gestures, each with a particular meaning. Many researchers and commercial firms seek to improve the lifestyle of deaf people in various areas of life. Apart from basic efforts dealing only with the ten integer numeric values or essential Kurdish alphabet representation, there is no adequate Kurdish sign language system. This research aims to study and examine a new system, developed based on the concepts of sign language, for Kurdish text and speech translation. The proposed approach is intended to interpret signs for a better life for the deaf and the public. The proposed framework falls into three phases: video processing; pattern creation and discrimination; and translation of the text. This technique depends on a translator structure trained in Kurdish sign language, mainly the standard Kurdish sign language, and on the domain expert's sign gestures.
Deep Learning Approaches for Intrusion Detection
Asian Journal of Research in Computer Science (Issue: 4) (Volume: 9)
Recently, computer networks have faced a big challenge: various malicious attacks are growing daily. Intrusion detection is one of the leading research problems in network and computer security. This paper investigates and presents Deep Learning (DL) techniques for improving the Intrusion Detection System (IDS). Moreover, it provides a detailed comparison covering evaluation performance, deep learning algorithms for detecting attacks, feature learning, and the datasets used, in order to identify the advantages of employing DL in enhancing network intrusion detection.
Conference
2021 International Conference on Advanced Computer Applications (ACA2021)
Iraq, Maysan As Presenter
Paper presented: Clustering Document based Semantic Similarity System using TFIDF and K-Mean (abstract listed above under Journal Publications).
2021 International Conference on Advanced Computer Applications (ACA2021)
Iraq, Maysan As Presenter
Paper presented: Hybrid Client/Server Peer to Peer Multitier Video Streaming (abstract listed above under Journal Publications).
2021 2nd Information Technology To Enhance e-learning and Other Application (IT-ELA)
Iraq, Baghdad As Presenter
Paper presented: Android Mobile Applications Vulnerabilities and Prevention Methods: A Review (abstract listed above under Journal Publications).
2021 14th International Conference on Developments in eSystems Engineering (DeSE)
United Arab Emirates, Sharjah As Presenter
Paper presented: A Review on Automation Artificial Neural Networks based on Evolutionary Algorithms (abstract listed above under Journal Publications).
2021 International Conference of Modern Trends in Information and Communication Technology Industry (MTICTI)
Yemen, Sana'a As Presenter
Paper presented: 5G Mobile Communication System Performance Improvement with Caching: A Review (abstract listed above under Journal Publications).
2021 International Conference on Advance of Sustainable Engineering and its Application (ICASEA)
Iraq, Wasit As Presenter
Paper presented: Cloud-based Parallel Computing System Via Single-Client Multi-Hash Single-Server Multi-Thread (abstract listed above under Journal Publications).
2021 International Conference on Software, Telecommunications and Computer Networks (SoftCOM)
Croatia, Hvar As Presenter
Paper presented: Massive MIMO-OFDM Performance Enhancement on 5G (abstract listed above under Journal Publications).
2021 4th International Iraqi Conference on Engineering Technology and Their Applications (IICETA)
Iraq, Najaf As Presenter
Paper presented: Ultra-Dense Request Impact on Cluster-Based Web Server Performance (abstract listed above under Journal Publications).
2021 4th International Iraqi Conference on Engineering Technology and Their Applications (IICETA)
Iraq, Najaf As Presenter
Paper presented: Comprehensive Study of Moving from Grid and Cloud Computing Through Fog and Edge Computing towards Dew Computing (abstract listed above under Journal Publications).
2021 4th International Iraqi Conference on Engineering Technology and Their Applications (IICETA)
Iraq, Najaf As Presenter
Paper presented: Design a Clustering Document based Semantic Similarity System using TFIDF and K-Mean (abstract listed above under Journal Publications).
2021 7th International Conference on Contemporary Information Technology and Mathematics (ICCITM)
Iraq, Mosul As Presenter
Paper presented: State of Art Survey for Deep Learning Effects on Semantic Web Performance (abstract listed above under Journal Publications).
2021 7th International Conference on Contemporary Information Technology and Mathematics (ICCITM)
Iraq, Mosul As Presenter
Population growth and the creation of new equipment are accompanied by a constant increase in daily energy use, creating significant consumer issues in energy management. Smart meters (SM) are instruments for measuring energy usage and a significant resource of the evolving technological energy-management system, providing precise billing data, information on usage at the user end, and two-way communication. The SM is the critical component of an intelligent power grid. The Internet of Things (IoT) is a critical partner of the power business, enabling intelligent resource management that ensures successful data collection and use. This paper proposes designing and analyzing an intelligent energy-management system based on Multi-Agent (MA) and Distributed IoT (DIoT) approaches. An efficient approach is proposed to monitor and control power-consumption levels in the case study of Duhok Polytechnic University (DPU). DPU consists of the Presidency, six colleges, and eight institutes; these fifteen campuses are distributed over a wide geographical area with long distances between campuses (more than 100 km). Each campus is represented by a Node, and Wi-Fi provides the connection inside each node. The nodes are connected via the Internet to the Main Control Unit (MCU), a Raspberry Pi connected to the cloud. Depending on the data received from the nodes, the MCU makes the correct decision for each node using intelligent algorithms and the user's requirements. Control commands are then issued, and each node's appliances can be controlled automatically (or manually) from the MCU.
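As a rough sketch of the node-to-MCU decision loop, assuming a hypothetical JSON message format and per-node consumption budgets; the transport between the campus nodes and the Raspberry Pi MCU (e.g., MQTT or HTTP over the Internet) and the paper's actual decision algorithms are omitted.

```python
import json

# Assumed per-node power budgets in kW (illustrative values, not from the paper).
NODE_LIMITS_KW = {"presidency": 120.0, "college_1": 80.0}

def mcu_decide(message: str) -> str:
    """MCU-side rule: compare a node's reported load with its budget
    and issue a control command for that node's appliances."""
    reading = json.loads(message)
    node, load_kw = reading["node"], reading["load_kw"]
    if load_kw > NODE_LIMITS_KW.get(node, float("inf")):
        return json.dumps({"node": node, "command": "shed_noncritical_loads"})
    return json.dumps({"node": node, "command": "normal_operation"})

# A node would periodically publish a reading like this one:
print(mcu_decide(json.dumps({"node": "college_1", "load_kw": 95.5})))
```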
2021 International Conference on Communication & Information Technology (ICICT2021)
Iraq, Basrah As Presenter
Strong parallelism can minimize computation time while increasing the cost of synchronization, so it is vital to keep track of how processes and threads behave. Thread-based systems are known to improve the productivity of complex operations: threading lightens the load on the main thread, enhancing system performance. This paper focuses on the development of a system with two main stages, monitoring and controlling a program able to run on a number of multicore system architectures, including those with 2, 4, and 8 CPUs. The algorithms associated with this work provide: reporting of the host computer-system information; status checking of all existing processes with their relevant information; and running all process/thread configurations of the user program (Single-Process/Single-Thread, Single-Process/Multi-Thread, Multi-Process/Single-Thread, Multi-Process/Multi-Thread, and Multi-Process/Single-Multi-Thread). The monitoring phase provides complete information on the User Program (UP) and all its processes and threads (Name, ID, Elapsed Time, Total CPU Time, CPU Usage, User Time, Kernel Time, Priority, RAM size, allocated core, read bytes, and read count). The controlling phase controls the processes and threads by suspending/resuming/killing them, modifying their priority, and forcing them onto a particular core.
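A minimal sketch of the two phases using the third-party psutil library (not necessarily the tooling the paper used); `monitor` and `control` are illustrative names, and only a subset of the listed attributes is shown.

```python
import psutil

def monitor(name_filter: str):
    """Monitoring phase: report per-process information similar to the
    attributes listed above (name, ID, thread count, CPU times, RAM)."""
    for p in psutil.process_iter(["name", "pid", "num_threads"]):
        try:
            if name_filter in (p.info["name"] or ""):
                times = p.cpu_times()
                print(p.info["pid"], p.info["name"],
                      "threads:", p.info["num_threads"],
                      "user:", times.user, "kernel:", times.system,
                      "RAM:", p.memory_info().rss)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is protected; skip it

def control(pid: int, core: int, priority: int):
    """Controlling phase: pin a process to a particular core, change its
    priority, then suspend and resume it (platform support varies)."""
    p = psutil.Process(pid)
    p.cpu_affinity([core])   # force the process onto one core
    p.nice(priority)         # modify its scheduling priority
    p.suspend()
    p.resume()
```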
2021 International Conference on Communication & Information Technology (ICICT)
Iraq, Basrah As Presenter
Spike sorting is a technique used to detect signals generated by neurons in the brain and to classify which spike belongs to which neuron. It is one of the most important techniques in electrophysiological data processing. Spike Sorting Algorithms (SSA) are created to differentiate the activity of one or more neurons from background electrical noise using waveforms recorded from one or more electrodes in the brain. Sorting thus plays an essential role in extracting information from extracellular recordings in the neuroscience research community. A spike sorting algorithm has several steps (detection, feature extraction, and clustering), and one of the most important concerns is the accuracy of classifying neuron spikes. This article gives a brief overview of the spike sorting algorithm; the contribution of this paper is a comprehensive overview of previous works on the spike sorting algorithm steps (detection, feature extraction, and clustering). Previous works introduced new techniques to solve the problem of overlapping spikes, used real-time or online spike sorting instead of offline spike sorting, and applied machine-learning algorithms for automatic classification in spike sorting.
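To make the three steps concrete, a minimal offline sketch assuming the common median-based detection threshold, PCA features, and K-means clustering; real pipelines add spike alignment, refractory-period handling, and overlap resolution, which are omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(signal: np.ndarray, window: int = 32, k: int = 3):
    # 1) Detection: amplitude threshold from a median-based noise estimate;
    # consecutive crossings of one spike are not deduplicated in this sketch.
    thresh = 4 * np.median(np.abs(signal)) / 0.6745
    idx = np.where(signal < -thresh)[0]            # negative-going spikes
    idx = idx[(idx > window) & (idx < len(signal) - window)]
    # 2) Feature extraction: cut waveforms around each crossing, reduce with PCA.
    waves = np.array([signal[i - window:i + window] for i in idx])
    if len(waves) < k:
        return idx, np.array([])                   # too few spikes to cluster
    feats = PCA(n_components=3).fit_transform(waves)
    # 3) Clustering: assign each spike to a putative neuron.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return idx, labels
```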
Conference Chair of IEEE Conference
Iraq, Duhok As Presenter
International Conference on Advanced Science and Engineering (ICOASE2019), at DPU and UoZ 2-4 October 2019.
Conference Chair of IEEE Conference
Iraq, Duhok As Presenter
International Conference on Advanced Science and Engineering (ICOASE2018), at DPU and UoZ 9-11 October 2018.
Seminar
The Relationship between Knowledge Management Maturity and Organizational Excellence
DPU Presidency, Duhok, Iraq As Attendee
Entrepreneurship training course
DPU Presidency, Duhok, Iraq As Attendee
Matrix
DPU Presidency, Duhok, Iraq As Attendee
Care for the Terminally Ill
DPU Presidency, Duhok, Iraq As Attendee
New Hashing Algorithm and an Authentication Technique to Improve IoT Security
IEC-2022 (IEEE), Erbil, Iraq As Presenter
Training Course
Ph.D.
Duhok, National
(Regular System), “IT Dept.”, Technical College of Informatics Akre, Duhok Polytechnic University (DPU)
Ph.D.
Sulaimani, National
(Regular System), “IT Dept.”, Technical College of Informatics, Sulaimani Polytechnic University (SPU)
M.Sc.
Hawler, National
“Information Software Engineering Dept.”, Erbil Technical College, Erbil Polytechnic University (EPU)
M.Sc.
Duhok, National
“Different Departments”, (Education, Medicine, Agriculture) Colleges, University of Duhok
Postgraduate Committee
Impact of Distributed-Memory Parallel Processing Approach on Performance Enhancing of Multicomputer-Multicore Systems
MSc Degree, As Supervisor
CSAERNet: An Efficient Deep Learning Architecture for Image Classification
MSc Degree, As Supervisor
Prototype Design of a COVID-19 Aware Localization System
MSc Degree, As Member
Energy Efficiency in Wireless Body Area Network
MSc Degree, As Member
MODEL-Based Performance Quality Assessment for IoT Applications
Ph.D. Degree, As Member
Improving IoT Security
Ph.D. Degree, As Supervisor
Fault Diagnosis of a Robot Arm Based on Wavelet/Slantlet Filters and Intelligent Strategies
Ph.D. Degree, As Member