F1 classification
Author: p | 2025-04-24
The F1 score is commonly used to measure the performance of binary classification, but extensions to multi-class classification exist. This section walks through how to compute the average, macro and micro F1 scores for multi-class classification.
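The macro and micro extensions can be computed directly from per-class counts. Below is a minimal sketch in plain Python (the labels are illustrative, not from any dataset in this article): macro F1 averages the per-class F1 scores, while micro F1 pools the TP/FP/FN counts over all classes, which for single-label multi-class data equals plain accuracy.

```python
# Minimal sketch: per-class, macro and micro F1 computed by hand
# for a multi-class problem. The label lists are illustrative only.
from collections import Counter

y_true = [0, 1, 2, 0, 1, 2, 0, 2, 2]
y_pred = [0, 2, 1, 0, 0, 1, 0, 2, 2]

classes = sorted(set(y_true) | set(y_pred))
tp, fp, fn = Counter(), Counter(), Counter()
for t, p in zip(y_true, y_pred):
    if t == p:
        tp[t] += 1        # correct prediction for class t
    else:
        fp[p] += 1        # predicted p, but the true class was t
        fn[t] += 1        # true class t was missed

def f1(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

per_class = {}
for c in classes:
    prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
    rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
    per_class[c] = f1(prec, rec)

# Macro F1: unweighted mean of the per-class F1 scores.
macro_f1 = sum(per_class.values()) / len(classes)

# Micro F1: pool TP/FP/FN across classes first; for single-label
# multi-class data this equals plain accuracy.
TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
micro_f1 = f1(TP / (TP + FP), TP / (TP + FN))

print(per_class, macro_f1, micro_f1)
```

Note how the two averages can disagree sharply when one class is never predicted correctly: its per-class F1 of 0 drags the macro average down, while the micro average only reflects the overall hit rate.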
A classification report is a performance evaluation metric in machine learning. It shows the precision, recall, F1 score, and support of a trained classification model, and so gives a better picture of the model's overall performance than a single number. If you have never used one before to evaluate a model, this article is for you: it introduces the classification report and its implementation in Python.

To read a classification report, you need to know the metrics it displays:

Precision: the ratio of true positives to the sum of true and false positives.
Recall: the ratio of true positives to the sum of true positives and false negatives.
F1 score: the weighted harmonic mean of precision and recall. The closer the value is to 1.0, the better the expected performance of the model.
Support: the number of actual occurrences of the class in the dataset. Support does not vary between models; it simply puts the other numbers in context.

Classification report using Python
To view the classification report of a machine learning model, we must first train one.
In the example below, a very simple model was trained to classify spam messages, and its performance was evaluated with a classification report:

              precision    recall  f1-score   support

         ham       0.99      0.99      0.99      1587
        spam       0.93      0.92      0.92       252

    accuracy                           0.98      1839
   macro avg       0.96      0.95      0.96      1839
weighted avg       0.98
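A minimal sketch of how such a report can be produced, assuming scikit-learn is available. The tiny message list and the CountVectorizer + MultinomialNB pipeline below are illustrative stand-ins, since the article's actual training code and dataset are not shown; the report it prints will therefore not match the numbers above.

```python
# Sketch: training a toy spam classifier and printing its
# classification report. The messages are made-up stand-ins for
# the article's (unshown) spam dataset; evaluating on the training
# set here is only to keep the example short.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

messages = [
    "win a free prize now", "claim your free cash", "urgent prize waiting",
    "lunch at noon?", "see you at the meeting", "can you send the report",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

X = CountVectorizer().fit_transform(messages)   # bag-of-words features
model = MultinomialNB().fit(X, labels)

# One call produces the precision/recall/f1/support table per class.
print(classification_report(labels, model.predict(X)))
```

In real use you would hold out a test split and pass its true and predicted labels to `classification_report` instead of the training labels.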
Podium: 1. Lewis Hamilton, 2. Sebastian Vettel, 3. Valtteri Bottas

2016 Canadian Grand Prix
Track: Circuit Gilles Villeneuve
Weather: 13°C, dry and cloudy
Tarmac: 27°C, dry
Humidity: 55%

Lewis Hamilton won his 44th Formula 1 Grand Prix in Canada today. Ferrari driver Sebastian Vettel finished 2nd and scored his 83rd podium; he could have won the race had he used the same tyre strategy as Hamilton. Williams driver Valtteri Bottas was able to steal 3rd place from Max Verstappen, who drove an epic race, holding off Nico Rosberg in his much faster Mercedes to take 4th place.

F1 classification, 2016 Canadian GP:

Pos  No  Driver             Team         Time/Retired   Laps  Grid  Pts
1    44  Lewis Hamilton     Mercedes     01:31:05.296   70    1     25
2    5   Sebastian Vettel   Ferrari      01:31:10.307   70    3     18
3    77  Valtteri Bottas    Williams     01:31:51.718   70    7     15
4    33  Max Verstappen     Red Bull     01:31:58.316   70    5     12
5    6   Nico Rosberg       Mercedes     01:32:07.389   70    2     10
6    7   Kimi Räikkönen     Ferrari      01:32:08.313   70    6     8
7    3   Daniel Ricciardo   Red Bull     01:32:08.930   70    4     6
8    27  Nico Hülkenberg    Force India  01:31:19.096   69    9     4
9    55  Carlos Sainz       Toro Rosso   01:31:24.787   69    15    2
10   11  Sergio Pérez       Force India  01:31:27.387   69    11    1
11   14  Fernando Alonso    McLaren      01:31:54.889   69    10    0
12   26  Daniil Kvyat       Red Bull     01:32:00.761   69    16    0
13   21  Esteban Gutierrez  Haas         01:31:05.784   68    13    0
14   8   Romain Grosjean    Haas         01:31:06.356   68    14    0
15   9   Marcus Ericsson    Sauber       01:31:14.346   68    21    0
16   20  Kevin Magnussen    Renault      01:31:28.002   68    22    0
17   94  Pascal Wehrlein    Manor        01:31:35.776   68    18    0
18   12  Felipe Nasr        Sauber       01:31:48.091   68    19    0
19   88  Rio Haryanto       Manor        01:32:19.873   68    20    0
DNF  19  Felipe Massa       Williams     Overheating    35    8     0
DNF  30  Jolyon Palmer      Renault      Water leak     16    17    0
DNF  22  Jenson Button      McLaren      Engine         9     12    0
Firework Categories, Classifications & Safety Distances
Consumer firework categories and their related safety distances explained.

Key information:
- Consumer fireworks are categorised as Category F1, F2 or F3.
- Category F1 fireworks are indoor or close-proximity fireworks with minimal safety distances (e.g. 1m).
- Category F2 fireworks are outdoor fireworks with spectator safety distances of at least 8m.
- Category F3 fireworks are outdoor fireworks with spectator safety distances of at least 25m.
- These are spectator distances; distances to firers and structures will differ (see below).
- Category F4 fireworks are for professional use only.

Category F2 and F3 fireworks
Starting off with the most common type of firework you'll be using as a consumer: Category F2 and Category F3 fireworks. These cover just about every type of firework, from rockets through to cakes, barrages, fountains and so on.

All fireworks on sale to the public have to be extensively tested to CE standards and classified as either Category F2 or F3. Within each type of firework, different criteria determine whether the firework has an F2 or an F3 category. It's beyond the scope of this article to delve into the technicalities too deeply, but to give an example, cakes with up to 500g of gunpowder in them are Category F2, whereas those with over 500g are Category F3. So in general terms, the more powerful a firework, the more likely it is to be Category F3 and require a greater safety distance.

Some restrictions under CE apply to all fireworks, for example an upper noise limit of 120dB. And the UK in addition bans certain types of firework, including bangers and mini-rockets, even though they might be legal under CE in other European countries.

Each firework will clearly show its category on the warning label (pictured: a Category F2 label). Category F2 fireworks require the smallest safety distance to spectators, typically 8m or 15m.
In actual fact the F2 classification requires a minimum distance of 8m, but manufacturers are free to increase this to any distance, so it's not uncommon to see even 20m stated on F2 fireworks. Category F3 fireworks are more powerful and require a spectator distance of at least 25m.

"Retire immediately" distance
You will also see on
It is generally accepted that CE fireworks offer better performance at their respective viewing distances, and a return to the more powerful fireworks that successive BS iterations had watered down. Although CE is an EU-wide classification, the UK government has still insisted that various fireworks our European friends enjoy remain banned in the UK. That includes aerial shells, bangers, screech rockets and airbombs.

At the time of writing (2023) it is still unclear what the implications of Brexit are on all of this. Members of the fireworks industry have indicated that they will be working towards a replacement in time, but until then, CE will remain in place. I'll update this article if I learn of any news. (An older BS fireworks label is now illegal to be sold by any retailer.)

Category F1 fireworks
This classification is given to very small fireworks requiring a minimal safety distance (often given as 1m). Examples include some types of indoor firework and novelty items. A Category F1 warning label notes the 1m safety distance.

Category F4 fireworks
Fireworks for professional-only use are given the F4 category. These are not available to members of the public and are for trained display operators only. Most Category F4 fireworks do not have an explicit safety distance, since it is down to the display operator to correctly set up and use them; many do not even have a delay fuse, as they are intended to be fired electrically. Often called "industrial fireworks" by the press, Category F4 fireworks would clearly be very dangerous in untrained hands. Contrary to popular myth, there is no such thing as a "licence" you can buy or train for that allows you, as a member of the public, to buy or use Category F4 fireworks. You can read more about this in my aerial shells and Category F4 fireworks article.

1.3G, 1.4G, HT3 and HT4
Further complicating matters is 1.3G and 1.4G.
This is a classification given to fireworks that relates to their potential hazard, and it is shown in a big orange diamond on the side of the firework's outer carton. It relates to transportation and packaging, with 1.3G being "more hazardous" than 1.4G.
Figure 3. An illustration of landmarking key points for cats and dogs. (Figure courtesy of Devyn Kelly.)
Figure 4. An example illustration of the face detection, cropping, and alignment of the model. (Figure courtesy of J. Marie Brown.)

Table 1. Confusion matrix of the animal species classification layer.

                Predicted
                Dog     Cat
Actual  Dog    2679      58
        Cat      40    2634

Table 2. Performance measurements for training and testing in the species classification layer.

Metric                                Result
Training accuracy on COCO dataset     85.92%
Testing accuracy on Kaggle dataset    98.18%
Testing precision on Kaggle dataset   97.88%
Testing recall on Kaggle dataset      98.52%
F1 score on Kaggle dataset            98.19%

Table 3. Accuracy report on face and body identification for top 10 recommendations.

Nth Recommendation   Face Identification Accuracy   Body Identification Accuracy   Face and Body Identification Accuracy
1                    80%                            81%                            86.5%
2                    14%                            15%                            10.5%
3                    3%                             2%                             2%
4 to 10              3%                             2%                             1%

Table 4. Performance comparison.

Study                           Identification Accuracy Using Face   Overall Accuracy (Soft Biometrics)
Method by Kenneth et al. [14]   78.09%                               80%
Our method                      84.94%                               92%

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Data science is an interdisciplinary field that combines various techniques, methods, and tools to extract valuable insights and knowledge from data. It involves the application of scientific methodologies, algorithms, and statistical analysis to uncover patterns, trends, and relationships within large and complex datasets. Data science plays a crucial role in understanding, interpreting, and making informed decisions based on data-driven evidence.

Key components of data science include:
- Data collection: gathering relevant and structured data from various sources, such as databases, sensors, websites, social media, and more.
- Data cleaning and preprocessing: ensuring data quality by eliminating errors, inconsistencies, and missing values. This step prepares the data for further analysis.
- Data exploration and visualization: using exploratory data analysis and visualization techniques to understand the characteristics and patterns within the data.
- Statistical analysis: applying statistical methods to derive meaningful insights and make predictions based on the data.
- Machine learning: implementing algorithms and models that can learn from data, identify patterns, and make predictions or classifications.
- Data interpretation and communication: interpreting the results of data analysis and presenting the findings in a comprehensible manner to stakeholders.

Data science quiz questions & answers

1. What is the process of converting raw data into a structured format for analysis?
   a) Data Visualization  b) Data Mining  c) Data Wrangling  d) Data Inference
   Answer: c) Data Wrangling

2. Which of the following is not a supervised learning algorithm?
   a) Decision Trees  b) Linear Regression  c) K-Nearest Neighbors (KNN)  d) K-Means Clustering
   Answer: d) K-Means Clustering

3. In data science, what does "EDA" stand for?
   a) Exploratory Data Analysis  b) Experimental Data Assessment  c) Essential Data Analytics  d) Extrapolated Data Arrangement
   Answer: a) Exploratory Data Analysis

4. What technique is used to reduce the number of features in a dataset while preserving important information?
   a) Principal Component Analysis (PCA)  b) Regression Analysis  c) Recursive Feature Elimination (RFE)  d) T-Distributed Stochastic Neighbor Embedding (t-SNE)
   Answer: a) Principal Component Analysis (PCA)

5. Which evaluation metric is commonly used for binary classification problems?
   a) Mean Absolute Error (MAE)  b) Mean Squared Error (MSE)  c) F1 Score  d) R-Squared (R2)
   Answer: c) F1 Score

6. Which algorithm is particularly well-suited for handling imbalanced datasets in classification tasks?
   a) Decision Trees  b) Random Forest  c) Support Vector Machines (SVM)  d) Naive Bayes
   Answer: b) Random Forest

7. Which data type represents categorical data that has an inherent order or rank?
   a) Ordinal  b) Nominal  c) Continuous  d) Discrete
   Answer: a) Ordinal

8. Which data visualization is best suited to
Data augmentation involves generating synthetic samples that resemble those in a given dataset. In resource-limited fields where high-quality data is scarce, augmentation plays a crucial role in increasing the volume of training data. This paper introduces a Bangla Text Data Augmentation (BDA) framework that uses both pre-trained models and rule-based methods to create new variants of the text. A filtering process is included to ensure that the new text keeps the same meaning as the original while also adding variety in the words used. We conduct a comprehensive evaluation of the framework's effectiveness in Bangla text classification tasks. Our framework achieved significant improvements in F1 score across five distinct datasets, delivering performance equivalent to models trained on 100% of the data while utilizing only 50% of the training dataset. Additionally, we explore the impact of data scarcity by progressively reducing the training data and augmenting it through BDA, resulting in notable F1 score enhancements. The study offers a thorough examination of BDA's performance, identifying key factors for optimal results and addressing its limitations through detailed analysis.
True Negative (TN) is the number of correctly predicted cats, and False Positive (FP) is the total number of cats predicted as dogs. Precision indicates how accurate the model is on images predicted as dogs, while recall measures how many of the actual dogs are labelled as dogs. Accuracy is the share of correctly predicted labels among all predictions. Our framework was evaluated using the precision, recall, F1 score, and accuracy metrics.

5. Experimental Results

5.1. Species Classification
In the animal detection and species classification layers, the COCO dataset's cat and dog categories, BC SPCA data, the Stanford Dogs dataset, and augmentation techniques were used for training. Images collected from Kaggle were used to test the model.

Table 1 shows the confusion matrix of the classification on the COCO dataset. We achieved 85.92% accuracy in training and 98.18% in testing; it can be inferred that for both classes, precision and recall stay high throughout most of the range of thresholds. The classification reports are shown in detail in Table 2. The difference between training and testing accuracy, shown in Table 2, is due to the difference in the source of the datasets: the COCO dataset was a difficult training set, since many of its images show the animal unclearly or only in small part, whereas the images from the Kaggle dataset show more of the animal and less noise. Table 2 lists the performance metrics for training and testing in the species classification layer: precision, recall, F1 score, and training and testing accuracy (for both the COCO and Kaggle datasets).

5.2. Identification and Recommendations
The Flickr-dog dataset, a dataset collected from pet owners, BC SPCA individual animal data, and crawled open-source images were used for recognizing and identifying the animals. Furthermore, data augmentation techniques were applied.
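As a sanity check, the Table 2 metrics can be recomputed from the Table 1 confusion matrix. Treating "dog" as the positive class is an assumption here (the paper's own precision/recall pairing may assign the counts the other way round, which would swap the two values):

```python
# Recomputing evaluation metrics from the Table 1 confusion matrix.
# Assumption: "dog" is the positive class; rows are actual labels.
tp, fn = 2679, 58    # actual dogs predicted as dog / as cat
fp, tn = 40, 2634    # actual cats predicted as dog / as cat

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f}")
```

The resulting accuracy (~98.19%) and F1 (~98.2%) line up with the reported figures, which supports the reading of the flattened confusion matrix above.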
Before any comparison, the animal must be registered in the database and assigned an ID. This ID is unique and differentiates animals from each other. The registration is completed using only a single image of the animal.
A Convolutional Neural Network (CNN) model for sentiment analysis of movie reviews achieved 81.5% accuracy; the results illustrate that the CNN was an appropriate replacement for state-of-the-art methods. The authors of [127] combined SST and a Recursive Neural Tensor Network for sentiment analysis of single sentences; this model improves accuracy by 5.4% for sentence classification compared to traditional NLP models. The authors of [135] proposed a combined Recurrent Neural Network and Transformer model for sentiment analysis. This hybrid model was tested on three different datasets (Twitter US Airline Sentiment, IMDB, and Sentiment140) and achieved F1 scores of 91%, 93%, and 90%, respectively, outperforming state-of-the-art methods.

Santoro et al. [118] introduced a relational recurrent neural network with the capacity to learn to classify information and perform complex reasoning based on the interactions between compartmentalized information, using a relational memory core to handle such interactions. The model was tested for language modeling on three different datasets (GigaWord, Project Gutenberg, and WikiText-103), and its performance was mapped against traditional approaches for relational reasoning over compartmentalized information; the results achieved with the RMC show improved performance.

Merity et al. [86] extended conventional word-level language models based on the Quasi-Recurrent Neural Network and LSTM to handle granularity at both the character and word level. They tuned the parameters for character-level modeling using the Penn Treebank dataset and for word-level modeling using WikiText-103; in both cases, their model outperformed state-of-the-art methods.

Luong et al. [70] used neural machine translation on the WMT14 dataset.
A test might show 95% accuracy if 95% of subjects are healthy, even if it misses all cancer cases.

Complementary metrics: use precision (TP / (TP + FP)) and recall (TP / (TP + FN)) alongside accuracy for a holistic evaluation.
Context matters: in some applications (e.g. fraud detection), reducing false negatives may be more critical than overall accuracy.

Frequently Asked Questions

How to calculate accuracy for a classification model?
Suppose a model classifies 200 images as "cat" or "dog":
TP: 80 (cats correctly identified)
TN: 90 (dogs correctly identified)
FP: 10 (dogs misclassified as cats)
FN: 20 (cats misclassified as dogs)
Accuracy = (80 + 90) / (80 + 90 + 10 + 20) = 170 / 200 = 85%

What is the difference between accuracy and precision?
Accuracy measures overall correctness, while precision focuses on the proportion of true positives among all positive predictions. For instance, a weather forecast with 90% accuracy might have lower precision if it often predicts rain incorrectly.

Can accuracy be 100%?
Yes, but only if there are no false positives or false negatives. In practice, 100% accuracy is rare due to measurement errors or overlapping data distributions.

Why is accuracy misleading in fraud detection?
Fraudulent transactions are rare (e.g. 0.1% of all transactions). A model predicting "no fraud" for all cases would achieve 99.9% accuracy but fail to detect fraud. Metrics like recall or the F1 score are more informative here.

How does sample size affect accuracy?
Larger samples reduce random errors. For example, testing 10,000 patients instead of 100 provides a more reliable accuracy estimate for a medical test.

Applications of accuracy calculators
Healthcare: evaluating diagnostic tests for diseases.
Manufacturing: assessing product quality control processes.
Machine learning: validating model performance during training.
Environmental science: measuring pollutant detection efficiency.
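The FAQ's worked example translates directly into code, using the same counts (cats as the positive class):

```python
# The FAQ's 200-image cat/dog example, computed explicitly.
tp, tn, fp, fn = 80, 90, 10, 20   # counts given in the FAQ above

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # 170 / 200
precision = tp / (tp + fp)                    # 80 / 90
recall    = tp / (tp + fn)                    # 80 / 100
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, f1)
```

Here accuracy (85%) sits between precision (~88.9%) and recall (80%), and the F1 score (~84.2%) summarizes the precision/recall trade-off in a single number.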