Classification - Science topic
The systematic arrangement of entities in any field into categories or classes based on common characteristics such as properties, morphology, subject matter, etc.
Questions related to Classification
Keep Murphy's Law, the KISS principle, and Popper's logic right, left, and center, respectively, along with the all-encompassing muses of insight, innovation, intuition, imagination, and insurrection: the five I's that, through an immersive, integrative, multi-disciplinary contemplative approach, separate the wheat from the chaff at the intersection of fact and fiction and, in true Aristotelian fashion, compose a whole greater than the sum of its parts, governing progress in human thinking in every endeavor, scientific and non-scientific.
I will put exactly 50 years of my part into one of the greatest mysteries ever faced by humans, one that would otherwise follow this species indefinitely, but now with secure and fearless knowledge through the application of principles or laws of theory and therapy, the elimination of canonical or institutional myths and assumptions, and a complete unwinding of this humongous Gordian knot of neuro-ophthalmology.
Da Vinci guarded against excessive use of words to describe any entity or anything. Migraine is an entity of excess: incidence, words, data, statistics, analyses, meta-analyses, hypotheses, viewpoints, perspectives, editorials, medical conference abstracts, invited lectures, hyper-splitting of nosology, and letters to the editor, all claiming to know a slice of truth or presumed truth about migraine, expanding hyper-exponentially and without limit. Quo vadis is not even a remote concept.
I, in the Third Millennium, describe the 'what' of migraine in 6-10 words, a definition that will last to perpetuity:
Migraine is the delayed outcome of an oculo-cephalic autonomic storm (causing the non-homonymous scintillating scotoma as well as the lateralizing headache). More succinctly, migraine is an oculo-cephalic autonomic storm.
Nothing is static. No theory or therapy is beyond improvement. The core of migraine is here.
With the cause-effect mechanisms in migraine pathophysiology fully described, what has been missing for 6 millennia or more is presented right here and now.
The doors of perception for cluster headache and other indomethacin-responsive headaches are now open.
Reversal of the hyper-split classification of primary headaches is imminent, leading to a holistic, comprehensive understanding of a large section of medicine and neuroscience.
06-MAY-2024
New Delhi
ORCID iD: 0000-0002-6770-5916
Pediatric supracondylar fractures sometimes show rotation after fixation, which may be significant in some cases. What is the normal range? What is the significance of Gordon's classification?
Within a project on the geographical traceability of horticultural products, we would like to apply classification models (e.g. LDA) to our data set to predict whether samples can be correctly classified according to their origin, based on the results of 20-25 different chemical variables.
We identified 5 cultivation areas and selected 41 orchards (experimental units) in total. In each orchard, 10 samples were collected (each sample from a different tree). The samples were analyzed separately. So, at the end, we have the results for 410 samples.
The question is: should the 10 samples per orchard be considered pseudoreplicates, since they belong to the same experimental unit (even if collected from independent trees)? Should the LDA be performed with 41 replicates (the 41 orchards, taking the average of the 10 samples per orchard), or should we run it on the whole dataset?
Thank you for your help.
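If it helps, the orchard-level dependence can also be handled without averaging by cross-validating at the orchard level, so no orchard contributes to both training and test folds. A minimal scikit-learn sketch, with synthetic numbers standing in for the chemical data (all values below are invented):

```python
# Group-aware cross-validation of LDA: 41 orchards x 10 samples,
# split by orchard so pseudoreplicates never leak across folds.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_orchards, samples_per_orchard, n_vars = 41, 10, 20
groups = np.repeat(np.arange(n_orchards), samples_per_orchard)  # orchard id
y = groups % 5                                                  # 5 areas (toy)
X = rng.normal(size=(n_orchards * samples_per_orchard, n_vars)) + y[:, None]

# Keeping all 410 samples but splitting by orchard addresses the
# pseudoreplication concern without discarding within-orchard variance.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                         groups=groups, cv=GroupKFold(n_splits=5))
print(scores.mean())
```

This way the classifier is evaluated on orchards it has never seen, which is usually the question traceability studies actually ask.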
Please provide an explanation of the classification of primary and secondary uranium ores.
Dear researchers,
Variability in the taxonomic classification of microbial communities when using different primer pairs (e.g. for 16S rDNA) is well known. However, mismatches to these primers are not described as the major reason for this bias. My question is: what are the other possible causes of this bias, and which is now supposed to be the major one?
Thank you for your contribution. Lucie
I am inclined to do research on EEG classification using ML/DL. The research area seems saturated, so I am confused about where I can contribute.
During the research for my bachelor's thesis on the classification of mining in the Reichsgruppen under the Nazi regime, I came across two different classifications:
- On the one hand, Boris Gehlen in his chapter "3.10 Energy industry" (in https://www.degruyter.com/document/doi/10.1515/9783110796353-016/html?lang=de) assigned mining to the Reichsgruppe "Energy Industry", which sounds logical to me.
- On the other hand, I have found some works that explain that mining belonged to the Reichsgruppe "Industry": https://www.degruyter.com/document/doi/10.1524/jbwg.1980.21.1.177/html
So which is correct?
I can only explain this apparent double classification by the central importance of mining for both economic sectors: On the one hand as a supplier of raw materials for industrial production and on the other as a key sector for energy supply.
Is it possible that in different sources and at different times the emphasis was placed more on one or the other affiliation, depending on which aspect of coal mining was in the foreground, so that it can be said that mining belonged to both groups?
What is the scope of implementing LIS classification and cataloguing in different fields?
Should I use the traditional UHI index classification (<0, 0-0.005, 0.005-0.010, 0.010-0.015, 0.015-0.020), under which 99% of my area falls within the UHI effect, or would a natural breaks classification, or a scheme like this (<0, 0-0.05, 0.05-0.10, 0.10-0.15, 0.15-0.20, >0.20), be more suitable for studying the urban heat island effect in our tropical region?
What are the criteria for the academic ranking of universities?
I have trained Convolutional Neural Networks (CNNs), from the LeNet-5 model to the EfficientNet model, on a large dataset of mammogram images (MI) to classify benign and malignant breast masses. All of these models give me a test accuracy of 50%. Why did most journals publish fake results?
These beautiful specimens of Cerambyx were photographed by me (not captured) in Abruzzo (Central Italy), loc. Sulmona. I was able to identify the genus but not the species.
Entomologists, please help me; I am looking for an exact classification.
Supervised Learning
In supervised learning, the dataset is labeled, meaning each input has an associated output or target variable. For instance, if you're working on a classification problem to predict whether an email is spam or not, each email in the dataset would be labeled as either spam or not spam. Algorithms in supervised learning are trained using this labeled data. They learn the relationship between the input variables and the output by being guided or supervised by this known information. The ultimate goal is to develop a model that can accurately map inputs to outputs by learning from the labeled dataset. Common tasks include classification, regression, and ranking.
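As a minimal illustration of the idea above, here is a scikit-learn sketch on invented spam-like features (the feature values and labels are purely illustrative):

```python
# Supervised learning: labeled inputs guide the model toward the
# input -> output mapping. Features: (suspicious-word count, length).
from sklearn.linear_model import LogisticRegression

X = [[8, 120], [7, 90], [9, 200], [0, 40], [1, 35], [0, 60]]  # inputs
y = [1, 1, 1, 0, 0, 0]                                        # spam / not spam
clf = LogisticRegression().fit(X, y)   # training is supervised by y
print(clf.predict([[6, 110]]))         # label for a new, unseen email
```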
Unsupervised Learning
Unsupervised learning deals with unlabeled data, where the information does not have corresponding output labels. There's no specific target variable for the algorithm to predict. Algorithms in unsupervised learning aim to find patterns, structures, or relationships within the data without explicit guidance. For instance, clustering algorithms group similar data points together based on some similarity or distance measure. The primary goal is to explore and extract insights from the data, uncover hidden structures, detect anomalies, or reduce the dimensionality of the dataset without any predefined outcomes. Supervised learning uses labeled data with known outcomes to train models for prediction or classification tasks, while unsupervised learning works with unlabeled data to explore and discover inherent patterns or structures within the dataset without explicit guidance on the expected output. Both have distinct applications and are used in different scenarios based on the nature of the dataset and the desired outcomes.
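The clustering case described above can be sketched in a few lines; the two synthetic blobs stand in for whatever similarity structure the algorithm must discover without labels:

```python
# Unsupervised learning: KMeans groups unlabeled points purely by
# distance, with no target variable involved.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.0],
              [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]])  # two obvious blobs
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # points in the same blob share a cluster id
```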
I am looking for a high-resolution dataset (an alternative to ImageNet) that has classes with sub-groups. I need this dataset for a domain transfer experiment; basically, I will use a DNN pre-trained on ImageNet to extract features. Example: in CIFAR-100, the 100 classes are grouped into 20 superclasses, such that each superclass has several sub-classes. I need a similar dataset, but it has to be high resolution. Can you suggest one?
I am using Google Earth Engine to produce a LULC classification map. For this purpose I have used the smile random forest classifier to classify Landsat 7 Top of Atmosphere data. Could you please tell me how I can validate the LULC classification map?
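As far as I know, the usual route inside GEE is to sample the classified image at independent reference points and build an error matrix from the resulting FeatureCollection. If you export those sampled points, the same accuracy assessment can be done offline; the label arrays below are invented for illustration:

```python
# Accuracy assessment from exported (reference, predicted) class pairs:
# confusion matrix, overall accuracy, and the kappa coefficient.
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

reference = [0, 0, 1, 1, 2, 2, 2, 1, 0, 2]   # ground-truth LULC classes
predicted = [0, 0, 1, 2, 2, 2, 1, 1, 0, 2]   # classifier output at same points
print(confusion_matrix(reference, predicted))
print(accuracy_score(reference, predicted))     # overall accuracy
print(cohen_kappa_score(reference, predicted))  # kappa coefficient
```

Per-class producer's and user's accuracies can be read off the rows and columns of the confusion matrix.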
What is the pixel classification for different land use in an NDBI map for hilly areas?
Can I cluster documents to label them as a first step, and then, in a second step, use the labelled documents to apply a classification method such as SVM, kNN, etc.?
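Yes, this cluster-then-classify (pseudo-labelling) approach is common. A minimal sketch, assuming the cluster labels are trustworthy enough to train on (synthetic blobs stand in for document vectors):

```python
# Step 1: cluster unlabeled data to obtain pseudo-labels.
# Step 2: train a supervised classifier on those pseudo-labels.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.svm import SVC

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
pseudo_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
clf = SVC().fit(X, pseudo_labels)        # SVM trained on cluster labels
print(clf.score(X, pseudo_labels))       # agreement with the clustering
```

Keep in mind the classifier can only be as good as the cluster labels, so it is worth inspecting or manually correcting them before step two.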
What is the most appropriate classification for the 12 identified compounds from the essential oil of a medicinal plant, which include:
1. Camphene
2. Para-Cymene
3. 1-Limonene
4. Gamma-Terpinene
5. Trans-Decalone
6. Cuminic Aldehyde
7. Cyclopentanone
8. Acetyl phenyl carbinol
9. 1-amino-1-o,
10. 5-Methyl-2-Phenyl indolizine
11. Silicic acid
12. 5-nitrobenzofuran-2-carboxylic
Is the categorization in the attached image correct?
Need help with an unsupervised deep image stacking project. Image stacking is a commonly used technique in astrophotography and other areas to improve the signal-to-noise ratio of images. The process works by first aligning a large number of short exposure images and then averaging them which reduces the noise variance of individual pixels. I have to do this process with neural networks by predicting a distortion field for each image and using a consistency objective that tries to maximize the coherence between the undistorted images in the stack and the final output. I need some learning materials for performing image stacking preferably in python and make a neural network. I already have experiences with training object classification and detection models and have worked on different YOLO models.
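Before the neural version, the classical align-and-average baseline is worth keeping as a reference point. A sketch with synthetic frames (assumed already aligned), showing the expected noise reduction of roughly sqrt(N):

```python
# Classical image stacking: average N aligned noisy exposures;
# the per-pixel noise standard deviation shrinks by about sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(size=(64, 64))                       # "true" image
stack = clean + rng.normal(0, 0.1, size=(32, 64, 64))    # 32 noisy exposures
stacked = stack.mean(axis=0)                             # average the stack

noise_before = np.std(stack[0] - clean)
noise_after = np.std(stacked - clean)
print(noise_before / noise_after)   # roughly sqrt(32), about 5.7
```

The neural variant you describe essentially learns the alignment (distortion field) so that this averaging step becomes valid; the consistency objective plays the role of the residual check here.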
Hyperspectral Imaging, Hyperspectral Classification, Statistical Test
Hello everyone,
I have applied a 1D CNN to speech emotion recognition. When I shuffled the columns I got different results; for example, matrix(:,[1 2 3]) gives different classification results than matrix(:,[2 3 1]), which I expected to be the same. I have tried rng("default") but it hasn't resolved the issue. Can someone please assist me with this?
Thank you in advance!
Hi. I am currently working on two deep learning research projects as a final-semester undergraduate student. To ensure the quality and acceptability of my work in a well-known journal, can someone provide me with guidelines, hints, and tips? As for my previous experience, I have worked on multiple conference papers.
Tips and hints can include this or anything else that you think is necessary:
1. What are the common mistakes I should avoid?
2. What should I always include, and what should I leave out?
3. How do i choose a good journal to publish my work?
and so on.
Dear Antonella Petrillo, Valerio Antonio Pamplona Salomon, Claudemir Leif Tramarico
I read your paper
State-of-the-Art Review on the Analytic Hierarchy Process with Benefits, Opportunities, Costs, and Risks
These are my comments
1- In the abstract you say “Aggregation approaches and outranking approaches are better classifications”
I agree that this classification is better than "American vs. European schools". For your information, there are methods that apply both.
2- In page 2 “The choice of an MCDM method should be based on characteristics of the decision problem”
I also agree with this, but unfortunately, practically in all MCDM methods, some characteristics are ignored in the modelling due to the inability of methods to cope with them. For instance, resources and limitations, inclusive and exclusive alternatives, precedence, time, binary variables, etc.
In my opinion, the choice of a method is simple: choose the MCDM method that best adjusts to the characteristics of your problem.
3- “One main reason for the AHP’s leadership in MCDM is its solid mathematical foundation”
This is inexact. AHP does not have any mathematical foundation, except in its use of eigenvalues.
Let's see why I say this. Do you think that there is mathematical foundation in:
a) Using pair-wise comparisons? There is no mathematical support, and it is a highly criticized procedure.
b) Assigning values to criteria based on intuition? Is this scientific, and what happens if another DM thinks differently?
c) Accepting that the final decision of the DM is controlled by a formula, and forcing the DM to correct her/his own estimates? So a formula, to get transitivity, supersedes the honest findings of the DM.
d) Assuming that criterion trade-offs are equivalent to criteria weights? These are two different concepts.
e) Assuming that what is in the mind of the DM is applicable to real life, and thus accepting that it is also transitive? What kind of mathematics supports this?
f) Using a logarithmic table, the 'Fundamental Scale', based on the Weber and Fechner laws on stimulus and response, and then comparing invented weights to stimuli?
The Dictionary defines stimulus as “Physiology, Medicine/Medical. something that excites an organism or part to functional activity”
Not even a remote relationship with the ‘weight’ concept.
g) AHP is unable to deal with complex scenarios, because its rigid linear hierarchical structure cannot represent transversal relationships.
Some AHP drawbacks were raised by Dyer in the 90s, and Saaty responded, but nothing can be extracted from those rebuttals. To be fair, rank reversal was discovered in AHP, but it is present in all MCDM methods, not only in AHP.
4- You talk about BOCR as if it were something new, when it started in the 50s, when the old C/B analysis was considered no longer appropriate.
Why are the four BOCR criteria mutually exclusive? Normally they are considered within the set of criteria. MCDM is not looking for optimality, since normally it does not exist. All MCDM methods look for a balance between opposing criteria like B and C.
Exclusivity means that BOCR cannot appear together, and this is neither realistic nor practical, because their coexistence is a common feature of most scenarios. If you want more information, I will be glad to supply examples, albeit not using AHP.
You are mistaken. A criterion can be used twice: for instance, a criterion asking for minimization, and the same criterion, with the same values, asking for maximization. I use this frequently. The software must find the equilibrium between those extreme values.
You talk about 'important criteria'? And how do you select those criteria, just by the weight values? There is no mathematical support for that. It is intuitive, no more than that.
And what if there is redundancy? What is the effect? From the mathematical point of view, none.
You refer to AHP but at the same time make references to ANP.
There is a large difference, since the ANP structure is able to handle complex scenarios because it works with a network. Probably Saaty developed it recognizing the limitations of AHP.
5- Page 3, Figure 1. Sorry, but you cannot apply AHP to this problem; AHP theory explicitly says that all criteria MUST be independent, which is not the case in your example. Quite the opposite: there are many transversal interrelationships.
6- In page 4 “First, this alternative may be too risky compared to alternatives one and two”
Obviously, you do not consider that an alternative may be too risky and yet have properties that compensate for this risk.
I am not referring to the compensation issue used in weights. The problem with AHP and other methods is that elements of the decision matrix are considered in isolation, when in reality, according to systems theory and reasoning, they have to be considered as a whole, holistically. For instance, you can reduce risk by increasing costs and/or decreasing benefits; therefore, you have to consider both at the same time.
I hope my comments may help
Nolberto Munier
How can teachers integrate artificial intelligence into their classrooms, discovering ways to move from ad hoc to automatic classification in their teaching?
Where can I find and download the following statistical data?
- 2012-2022, China, statistical data related to "dishonest persons subject to enforcement."
- 2012-2022, China, institutional documents related to the establishment of the credit system and punitive mechanisms.
- 2012-2022, China, classification statistical data for various types of patents.
- 2012-2022, China, classification statistical data for the publication of academic research papers.
- 2012-2022, China, classification statistical data for business registration.
- 2012-2022, China, statistical data related to innovation and entrepreneurship.
- 2012-2022, China, statistical data related to the population involved in innovation and entrepreneurship.
Classify different regions or countries based on their SOC levels and trends, and provide an analysis of how these classifications relate to the specific challenges and priorities they face in achieving the SDGs, particularly in combating land and soil degradation.
Hello researchers!
I am currently working on the classification of objects in the same image, and I want to use the multilabel classification method with MATLAB.
For 7 classes (groups) of objects, I would like to be offered code that generates a table which displays "1" for an object present in the image and "0" otherwise, and likewise for multiple objects in the same image.
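In case a Python sketch helps as a starting point (the class names below are invented; the same logic ports directly to MATLAB), the presence/absence table can be built from per-image label lists like this:

```python
# Multilabel presence table: one row per image, one 0/1 column per class.
from sklearn.preprocessing import MultiLabelBinarizer

classes = ["car", "dog", "tree", "person", "bike", "cat", "house"]  # 7 groups
per_image = [["car", "tree"], ["dog"], ["person", "bike", "car"]]   # detections
mlb = MultiLabelBinarizer(classes=classes)
table = mlb.fit_transform(per_image)   # rows: images, cols: classes, 1=present
print(mlb.classes_)
print(table)
```

Multiple objects in one image simply set several 1s in that image's row.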
Can anyone tell me which classifications of optimization techniques are useful for current research trends, and how to choose a suitable optimization technique for a research problem?
I am looking for research focused on classifying Arabic text into (verb / noun / letter).
Most of what I have found is stemming and deep learning work, but not word classification.
Any help, please?
I am currently working on a prediction project using machine learning classification techniques.
I have already computed various classification metrics like accuracy, precision, recall, AUC-ROC, and F1 score. What I am struggling with is how to (objectively) interpret these metrics in terms of their quality. For instance, in frequentist statistics, there are established ranges for interpreting effect sizes (e.g., small, medium, and large effect sizes).
Is there a similar set of guidelines or conventions with a citable source for interpreting classification metrics? I am particularly interested in categories like "poor", "sufficient", "satisfactory", "good", and "excellent" or similar.
I understand that the context and the specific task are crucial for any interpretation, but I still need a citable source that provides general guidelines, especially because this involves an educational context.
Thank you in advance!
I'm trying to create an image classification model that classifies plants from an image dataset made up of 33 classes; the total number of images is 41,808. The classes are unbalanced, but that is something my thesis team and I will work on using k-fold; but back to the main problem.
The VGG16 model itself is a pre-trained model from Keras.
My source code should be attached in this question (paste_1292099)
The results of a 15-epoch run are also attached.
What I have done so far is change the optimizer from SGD to Adam, but the results are generally the same.
Am I doing something wrong, or is there anything I can do to improve this model to get it at least into a "working" state, regardless of whether it is overfitting or the like, as that can be fixed later?
This is also the link to our dataset:
It is specifically a dataset of medicinal plants and herbs in our region, with their augmentations. They are not yet resized or normalized in the dataset.
Dear colleagues
Good day to you all. One of the most famous debates in Igneous Petrology is the relation between the diamondiferous rocks (i.e. lamprophyres, lamproites, orangeites and kimberlites). In 1991, the late geologist Nick Rock considered them to have similar petrological and geochemical signatures and were included in one group named the "Lamprophyre Clan". Recent publications have shown that relations do exist (e.g. see The "Lamprophyre Clan" Revisited 2022 paper in ResearchGate). The Version of Record is available online at: https://doi.org/10.1007/s12594-022-2153-4. One can also read the Version of Record through the Springer SharedIt link https://rdcu.be/cVljF (note that you need to use Wi-Fi in order to open the Springer SharedIt link).
On the other hand, igneous petrologist Roger Mitchell, who disagreed with the idea, proposed the "lamprophyre facies" concept, which includes rocks that formed under volatile-rich conditions. Which one is correct? GPT-4 was also asked; the answer was that both terms can be correct, but they represent different perspectives in the study of these rocks. What is your opinion? Please comment.
Best regards
Ioannis Kamvisis
I would like to ask for help with the classification of native or invasive species, especially those that are classified as NATIVE in Brazil but are classified as INVASIVE in other countries.
For example: Euphorbia prostrata is a species native to Brazil, but considered invasive in other countries. What would be the correct classification for this species: NATIVE; INVASIVE; or NATIVE/INVASIVE?
This raises a question because the species is both native and invasive at the same time.
Hello:
I have a question about the definition of adaptive control, since I'm researching how to build a model-free adaptive control system. I would appreciate your help.
The definition I found says that an adaptive controller modifies its parameters or structure in order to achieve a performance index. Reading about the subject in different sources, I noticed that when they refer to an adaptive controller, it always has a model of the plant and implies an adaptation law, which is usually obtained by taking the model and manipulating expressions.
My deduction is that when these sources refer to adaptive control, they mean this kind instead. Am I right? I would really appreciate your support on this.
Thanks.
Pablo.
What are the applications of an embedded system, and what are the classification and requirements of embedded systems?
Does the large number of comorbidities in current mental illness suggest that the current classification of illness is problematic? What diagnostic classification criteria do psychiatrists need to identify and treat disorders?
Classification of rosehip (Rosa canina L.) genotypes according to different usage purposes and further breeding objectives
- July 2023
- DOI: 10.21203/rs.3.rs-3174428/v1
- License
- CC BY 4.0
- Melekber Sulusoglu Durul
- Kerem Mertoglu
- Nazan Korkmaz
- Show all 5 authors
- İbrahim Bulduk
Dear Editor,
This article was rejected by this journal, and we need to remove it in order to upload it to another journal. We have tried to remove this pre-print from our pages, but unfortunately it appears again and again. Could you please help us remove it completely?
My best regards,
Melekber SULUSOGLU DURUL
Why is CNN better than SVM for image classification, and which is better for image classification: machine learning or deep learning?
What deep learning algorithms are used for image processing, and which CNN algorithm is used for image classification?
What are the classifications of agrivoltaics, and how is agrivoltaics a smart farming strategy to boost farmers' income?
To achieve the desired power quality, the causes of disturbances must be identified and corrected through the detection and classification of various power quality disturbances.
Hi everyone. I'm working on skin cancer classification: I've extracted features from three pre-trained CNN models and concatenated all the features, and finally a dense layer with a softmax function was used for classification.
But I encountered constant validation accuracy with lower training accuracy.
How can I solve this problem? I have already tried many optimizers with different learning rates, regularizers, early stopping, and dropout, and I also used image augmentation.
Which machine learning algorithm is specifically designed for binary classification problem?
Hello community,
I am new to random forests. I understand how they are trained, with random selection of features at each split, and so on. In the end we have n_trees, each of which gives a different estimate.
All the code, tutorials, and papers I have read so far (not many, I confess) produce only one output: the average in the case of regression, or the most frequent class in the case of classification.
I am very much interested in the distribution of values that all the n_trees give. Is there a theoretical reason why one should NOT do this? Is it conceptually not meaningful somehow?
In any case, does someone know how to get those values if I want them? I didn't find how to do this with R's party package, and I'm currently migrating to Python scikit-learn.
Thank you very much and best regards!
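There is no theoretical barrier to looking at the full per-tree distribution; in scikit-learn the fitted trees are exposed in `estimators_`, so you can query each one directly. A sketch on synthetic data:

```python
# Per-tree predictions of a random forest regressor: the ensemble
# prediction is exactly the mean of the individual tree predictions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# One prediction per tree for the first sample:
per_tree = np.array([tree.predict(X[:1]) for tree in rf.estimators_]).ravel()
print(per_tree.mean())   # equals rf.predict(X[:1])[0]
print(per_tree.std())    # spread across the 100 trees
```

One caveat: the spread of tree predictions is not a calibrated predictive interval; if that is what you are after, methods such as quantile regression forests are designed for it.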
Seeking insights from the research community: Does the imbalance of textual genres within corpora, when used as an explanatory variable rather than the response variable, affect the performance of logistic regression and classification models? I'm interested in understanding how the unequal distribution of word counts across genres might introduce biases and influence the accuracy of these machine learning algorithms. Any explanations or relevant details on mitigating strategies are appreciated. Thank you!
I need to know about the deep learning algorithms used in land cover classification and which one is best suited; I am planning to use Sentinel-2 satellite images.
I also want to know about GANs in land cover classification.
What are the approaches to classifying patent data using deep learning (document, text, word, labels)?
How is patent classification done using CNNs, DNNs, or RNNs?
Is transfer learning effective in patent classification?
For example, what is the sesquioxide classification of laterite?
Hello everyone. I am very curious about the out-of-plane and in-plane properties of SU-8 photoresist in materials classification. What is the difference between them?
Please share the physicochemical properties of mirogabalin besylate.
Image classification is an area where images are the primary data, used in various domains such as agriculture, health, education, and technology. However, ethical considerations are a significant concern in this area. How should ethics be handled and taken into account?
Why is agricultural diversification essential for sustainable livelihoods, and what are the classification and characteristics of agricultural markets in India?
I am starting with AI: training, detection, and classification of small objects like loosened nuts, bolts, etc. on a railway track. What is the ideal way to collect an image dataset?
1. Does the background matter in the pictures? Should I use multiple backgrounds?
2. Should all images come from the same height but different angles, or is varying the height also important?
3. Or is a top-down angle from various heights more effective?
And what is the best way to train the model?
1. With any dataset, iteratively training the model by manually correcting every incorrect inference,
2. or using a very large dataset without iterations?
I want to annotate each gene in the Homo sapiens taxon with its respective GO terms and its hierarchical parent terms in the GO database. How can I systematically do that? While I am aware that the obo file contains information such as "is a," "part of," and "regulates," it lacks a comprehensive hierarchy from child GO terms to all their parent terms. Is there an existing method available to achieve this systematic annotation, or do I need to develop a custom script to extract this information from the obo file?
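A custom traversal is straightforward once the ontology is loaded as a graph; packages such as obonet or goatools handle the obo parsing, and the transitive closure over "is a"/"part of" gives every parent of a term. A toy sketch with networkx, using invented term ids, where edges run child to parent so all ancestors are the nodes reachable from a term:

```python
# Toy GO fragment as a DAG; real workflows would load go-basic.obo
# (e.g. with obonet) into the same kind of directed graph.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("GO:child", "GO:parent1"),    # is_a (edge: child -> parent)
                  ("GO:child", "GO:parent2"),    # part_of
                  ("GO:parent1", "GO:root"),
                  ("GO:parent2", "GO:root")])

# With child -> parent edges, everything reachable FROM a term is an ancestor.
ancestors = nx.descendants(g, "GO:child")
print(sorted(ancestors))
```

To annotate genes, you would take each gene's directly assigned GO terms and union them with these reachable sets, which propagates every annotation up to the root.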
I want to solve the imbalanced data issue for a classification problem. Which technique is likely to be more effective?
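Two common first steps, sketched with scikit-learn on invented data: cost-sensitive class weights (no change to the data) and naive random oversampling of the minority class:

```python
# A 95:5 imbalanced binary problem, handled two ways.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(2, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)

# Option 1: reweight the loss so minority-class errors cost more.
clf = LogisticRegression(class_weight="balanced").fit(X, y)

# Option 2: randomly oversample the minority class to 95 rows.
idx = np.concatenate([np.where(y == 0)[0],
                      rng.choice(np.where(y == 1)[0], 95, replace=True)])
clf2 = LogisticRegression().fit(X[idx], y[idx])
print(clf.score(X, y), clf2.score(X, y))
```

More elaborate resamplers such as SMOTE (in the imbalanced-learn package) follow the same pattern; whichever you choose, evaluate with stratified splits and metrics beyond plain accuracy (e.g. F1, balanced accuracy).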
Are there any clear references for classifying the risk level of land subsidence based on the annual subsidence rate, such as 30 mm/year or 50 mm/year? Many thanks for your time.
In Bio-Signals and Systems we are introduced to quite a number of biosignals, but what are some classification methods for those signals? Is using a CNN one of the classification methods for biomedical signals?
In the case of EMG, motor imagery, or most other biosignal classification problems, accuracy improves when more features are added. However, in the case of SSVEP signal classification, everyone uses only one method: MEC, CCA, FFT, or PSD. Can we add more frequency-domain features to MEC to further improve the results?
Hello everybody. I'm a master's degree student working with 16S data on some environmental samples. After all the cleaning, denoising, etc., I now have an object that stores my sequences, their taxonomic classification, and a table of counts of ASVs per sample linked to their taxonomic classification.
The question is: what should I do with the counts when assessing diversity metrics? Should I transform them prior to calculating the indexes, or should I transform them according to the index/distance I want to assess? Where can I find some resources on these problems and related topics to study?
I know that these questions may be very simple ones, but I'm lost.
As far as I know, there is no consensus on the statistical operation of transforming the data, but I cannot leave the counts raw because of the compositionality of the data.
Please help
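One common route for the compositionality problem (not the only one) is a pseudocount followed by the centred log-ratio (CLR) transform before Euclidean-based ordination; for classical alpha-diversity indices, many workflows instead rarefy or use the raw counts. A CLR sketch in numpy with an invented ASV table:

```python
# Centred log-ratio (CLR) transform of a samples-by-ASVs count table.
import numpy as np

counts = np.array([[120, 30, 0, 5],
                   [ 80, 60, 2, 0]], dtype=float)   # invented ASV counts
pseudo = counts + 0.5                               # simple zero replacement
clr = np.log(pseudo) - np.log(pseudo).mean(axis=1, keepdims=True)
print(clr.sum(axis=1))                              # each row sums to ~0
```

The compositional-data literature (e.g. the "microbiome datasets are compositional" discussion) and the scikit-bio/phyloseq documentation are reasonable starting points for choosing per-index transforms.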
Dear community!
I am currently searching for a text dataset and maybe you can point me in the right direction.
I am looking for a dataset that possibly covers a medical problem (not necessarily), addresses (multilabel) document classification and contains some numeric values in the text. Ideally, the numeric values (e.g. measurements) have an influence on the classification labels (e.g. label is 'fever' when temperature in the text is above X). Of course, large data and many labels would be great, but I am thankful for interesting suggestions.
Thank you!
Roland
Few-shot learning is nowadays gaining ground for classification.
Hi everyone,
Recently, I have read some papers relating to EEG-based neuromarketing, in which the aim is to classify customer preference (like/dislike of a product) using the participant's EEG signal. I think I do not really understand:
1. How is the ground-truth preference (like or dislike) labelled? If we use self-report to label the data, then what is the meaning of the EEG?
2. The first question leads to the second one: how can this be applied in real-world scenarios?
I find it very difficult to find an article that clearly explains those questions. I would greatly appreciate it if you could spare some time to help. Thank you so much!
What is the best way to classify attacks in the IIoT? How can we differentiate this classification between the IIoT and CPS?
For sleep apnea detection using deep learning, can I first perform sleep stage classification and then apnea detection? And how do I feed the output of the sleep stage classification model as input to the sleep apnea detection model, given that I use a different dataset for each model?
Hi, I am trying to tune the parameters of a classification problem. Can I use MSE as the fitness function in PSO, or is it suitable only for regression problems? If the latter, what fitness function can be used for classification problems?
Thanks
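For classification, PSO usually minimizes either the misclassification rate (1 − accuracy) or a smooth surrogate such as cross-entropy; MSE on predicted class probabilities also works, though it is more natural for regression. A minimal NumPy sketch of two candidate fitness functions for a binary classifier (the function names are illustrative, not from any particular PSO library):

```python
import numpy as np

def cross_entropy_fitness(y_true, y_prob, eps=1e-12):
    """Fitness to *minimize* in PSO: binary cross-entropy on predicted
    class probabilities. Smooth, so it gives a more informative search
    landscape than raw accuracy."""
    y_prob = np.clip(y_prob, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

def error_rate_fitness(y_true, y_prob):
    """Alternative fitness: misclassification rate (1 - accuracy)."""
    return np.mean((y_prob >= 0.5).astype(int) != y_true)

# toy example: each PSO particle would produce its own y_prob
y_true = np.array([0, 1, 1, 0])
y_prob = np.array([0.1, 0.8, 0.6, 0.3])
print(cross_entropy_fitness(y_true, y_prob))
print(error_rate_fitness(y_true, y_prob))  # 0.0 (all four correct at 0.5 cutoff)
```

Inside the PSO loop, each particle's decoded hyperparameters train a classifier, and the fitness is evaluated on a validation split, never on the training data itself.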
- Describe the different methods used for implementing land capability classification and land suitability classification, including the use of remote sensing, GIS, and field surveys.
- Analyze the strengths and weaknesses of each method.
- Discuss how they can be combined to create more accurate and comprehensive land use assessments.
- Evaluate the role of community participation and local knowledge in the implementation of these classification systems.
- Discuss the potential benefits of incorporating indigenous knowledge into land use planning and management.
Can anyone download this paper?
Task Offloading with Task Classification and Offloading Nodes Selection for MEC-Enabled IoV
If a study examines the interaction effect of X1 (a dichotomous variable) and X2 (a dichotomous variable) on Y, the mediating variable M mediates the interaction effect, and the moderating variable W changes the classification of the independent variable X1 (e.g., from dichotomous to trichotomous), how should the conceptual model figure be drawn?
What is the meaning of the news, and what are its classifications?
What is the impact of varying the number of hidden layers in a deep neural network on its performance for a specific classification task, and how does this impact change when different activation functions are used?
I want to do further research on aquatic microplastic classification and quantification. My target areas will be water, sediment, and plankton. I am in the initial stage of my work. I plan to prepare three review articles on microplastics within six months for Scopus-indexed journals. I am seeking collaborations with those in the same field having a similar interest in publication. My inbox is open for discussion. Authorship will be split equally. The corresponding author will be from my side.
Thanks for your time and consideration.
Thanks in advance.
I believe it is very difficult to discriminate slum areas with any method other than object-based classification, since they have very heterogeneous reflectance, but I am interested in pixel-based classification. Is there any pixel-based classification approach that can discriminate slum areas?
I am trying to perform unsupervised K-means classification in ENVI 5.1 (also tried 5.2), but it results in a homogeneous image (completely dark or red). Does anyone know what the problem is?
I am attaching the result.
The omnibus tests of model coefficients shows that the model is significant. However, the classification table under Block 1 shows the same ratio of observed to predicted values as Block 0. Does this mean that my independent variables do not contribute to the model? I've read online that this problem may arise due to rare occurrence of events. Is there an alternative way to do the classification tables or is it acceptable to report the findings of having same classification table for block 0 and block 1 due to rare occurrence of events? Thank you.
I have a collection of sentences that is in an incorrect order. The system should output the correct order of the sentences. What would be the appropriate approach to this problem? Is it a good approach to embed each sentence into a vector and classify the sentence using multiclass classification (assuming the length of the collection is fixed)?
Please let me know if there can be other approaches.
Notani et al.
Epstein et al.
Marx et al.
Glanzmann et al., etc.
Hello qualitative software users,
I am working on a project in which I have linked files to cases that have attributes, but I could not find a way to combine a code with a certain attribute and see all interviews that share both that attribute and that code. Is there a way to do this?
Is there any standard to perform landform classification based on curvature and slope using specific values for small agricultural areas (10-20ha)? For instance, to define the following hillslope positions: summit, shoulder, backslope, footslope, and toeslope. Thanks.
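There is no single fixed standard for such small areas; slope and curvature thresholds are normally calibrated per site. As a hedged illustration only, here is a simple rule set over slope and profile curvature; every threshold value below is an assumption to be tuned for a 10-20 ha field, and the curvature sign convention (positive = convex here) differs between GIS packages:

```python
def hillslope_position(slope_deg, profile_curv):
    """Classify a cell into a hillslope position from slope (degrees) and
    profile curvature (assumed convention: > 0 convex, < 0 concave).
    All thresholds are illustrative placeholders, not a published standard."""
    flat = slope_deg < 2.0              # near-level ground
    if flat and profile_curv >= 0:
        return "summit"                 # level, planar-to-convex crest
    if flat:
        return "toeslope"               # level, concave base of the slope
    if profile_curv > 0.01:
        return "shoulder"               # convex, steepening transition
    if profile_curv < -0.01:
        return "footslope"              # concave, easing transition
    return "backslope"                  # steeper, near-planar mid-slope

# example cells: (slope in degrees, profile curvature)
for s, c in [(1.0, 0.0), (8.0, 0.05), (15.0, 0.0), (6.0, -0.05), (1.5, -0.02)]:
    print(s, c, hillslope_position(s, c))
```

In practice the same rules would be applied cell-by-cell to slope and curvature rasters derived from a DEM, after checking how your GIS defines the curvature sign.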
Which are the latest deep learning models for zero-shot speech classification?
Please share if you have a thesis that contains a complete methodology for performing the HFACS method to identify unsafe acts. Thanks.
My team and I are in the middle of a prioritization problem that involves 350 alternatives (see figure for context about alternatives) or so. I have used the AHP to support the decision-making process in the past with only 7 or 8 alternatives and it has worked perfectly.
I would like to know if the AHP has a limit on the number of alternatives, because consistency may become a problem: Dr. Saaty's method provides random consistency indexes (RI) only for matrix sizes of up to 10.
I was thinking of distributing the 350 alternatives into groups of 10, according to an attribute or classification criterion, so as to be able to use the RI chart proposed by Dr. Saaty.
If there are other more suitable multi-criteria analysis tools, or different approaches to calculating the RI for larger matrices, please let me know.
Greetings and thank you,
Which are the latest deep learning models for zero-shot image classification?
We are applying the k-means clustering algorithm to unlabeled data. Our aim is, at the end, to produce a result that shows two possibilities. Is it then necessary to carry out k-NN classification after the clustering?
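A k-NN step is not required to obtain the two groups: the k-means cluster assignments themselves are the result. k-NN (or simply nearest-centroid) only becomes useful when new, unseen points must be assigned to the clusters already found. A self-contained NumPy sketch on synthetic stand-in data (the blob positions and the deterministic seeding are choices made for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
# two synthetic blobs standing in for the unlabeled data
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(5.0, 1.0, (50, 2))])

def kmeans(X, k=2, iters=50):
    # seed with the first and last points (one per blob) for a deterministic demo
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)  # `labels` IS the two-group result

# assigning a brand-new point: nearest centroid, no trained k-NN model needed
new_point = np.array([5.2, 4.8])
new_label = int(np.argmin(((centers - new_point) ** 2).sum(-1)))
```

So the answer is usually no: run k-NN afterwards only if you later need to classify fresh observations into the discovered clusters, and even then nearest-centroid is often enough.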
Suppose we conveniently extended the standard concept of a cellular automaton to include graphs and state spaces Q of any cardinality, and that the transition function F belonged to a certain adequate notion of "(hyper)computable function". We call this a hyper-cellular automaton (HCA).
Consider the postulate: the universe can be described by a HCA with transition function F.
We cannot escape the problem of the initial condition Q_0. In the Wolfram classification, random initial conditions are considered; hence the expediency of some topology or measure on Q.
Q will include, for instance, the usual sheaves (principal bundles and connections) considered in the Standard Model. It will also include other aspects to account for quantum gravity, consciousness, emergent biological complexity, etc.
It is an empirical fact that this HCA must be WC4 "complex patterns of localised structures" in the Wolfram Classification.
A major problem with the goal of reverse engineering F is that we do not have evolutions for other initial conditions at our disposal, neither for the universe nor for subsystems of the universe. For physics, at least, a lot of locality and invariance hypotheses come into play to justify the universality of experimental conclusions. The chemistry we observe on Earth must also be that of the most distant star.
For biology the situation is drastically different. My question is: how can biology go beyond being a merely descriptive science as contrasted with fundamental physics ?
Biology seems to be mainly a "reverse engineering" affair. But it is also important to have detailed, mathematically precise models, perhaps using HCAs, that can be used to test hypotheses and perform simulations.
Molecular biology suggests a new paradigm for software-hardware, a fluid mobile computer with essentially interconnected parts. A key characteristic is that information operations are tied to material and energetic constraints.
Also we must focus on ecosystems (the analogue of the cell ? ) rather than individual species. What about the idea of a "natural internet" (via horizontal gene transfer, etc.) ?
When using ML for classification tasks in disease diagnosis study, what level of accuracy can be considered enough, high, or a threshold/benchmark?
What are the aim and objectives of Android malware classification using machine learning algorithms?
Hi everyone,
I did a classification study with a small number of samples (n = 73). I compared different machine learning classifiers, and two of them, SVMs with a polynomial kernel and with a radial basis function (RBF) kernel, produced an overall accuracy of 100%. Is this finding acceptable? If not, why?
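With n = 73, a perfect score is a warning sign: it often reflects information leakage (e.g., scaling or feature selection performed before the train/test split) or an optimistic single split. A common sanity check is stratified cross-validation, reporting the spread of fold accuracies rather than one number. A NumPy-only sketch on synthetic stand-in data; the nearest-centroid classifier here is just a placeholder for your SVMs, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for a small two-class dataset with n = 73
X = rng.normal(size=(73, 5))
y = (rng.random(73) < 0.5).astype(int)
X[y == 1] += 1.0  # shift class 1 so the task is learnable but not trivial

def stratified_kfold_indices(y, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs with class proportions preserved."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        for i, j in enumerate(rng.permutation(np.where(y == cls)[0])):
            folds[i % k].append(j)
    for i in range(k):
        test = np.array(folds[i])
        train = np.array([j for f in folds[:i] + folds[i + 1:] for j in f])
        yield train, test

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Placeholder classifier; swap in your SVM training/prediction here."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

accs = [nearest_centroid_acc(X[tr], y[tr], X[te], y[te])
        for tr, te in stratified_kfold_indices(y, k=5)]
print(f"fold accuracies: {np.round(accs, 2)}, mean = {np.mean(accs):.2f}")
```

If your SVMs still hit 100% on every fold of a pipeline where all preprocessing happens inside the training fold, the result is more defensible, though with 73 samples the confidence interval remains wide.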
Autonomic seizures (AS) are not easily recognized: the area of origin is often unclear, and the ictal EEG is usually negative. Does anyone have video ictal EEG recordings of AS?
I'm doing research on deixis and its meanings in pragmatics. I haven't found any journals, books, or slides about pragmatic meanings in deixis; most treatments explain them semantically, not pragmatically. Are pragmatic meanings in deixis the same as speech act classifications? Or can we use implicature theories to determine those pragmatic meanings?
How to determine pragmatic meaning in this dialogue example?
Mother: What do you want for Christmas?
Daughter: I want a violin!
Father: Didn't you want a doll? You said that one week ago.
Daughter: Now I want violin, papa!
If the deixis in that dialogue is "you", is the pragmatic meaning that the mother is asking the daughter what she wants for Christmas? Or is that the semantic meaning?
Thank you in advance.
Can someone clarify the morphological classification of marine macroalgae used by Littler and Littler (1981)?
I have implemented a CNN for my image classification task. Now I want to save the features from this model into a local CSV file. How can I do this? Here is the example model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(200, 200, 3)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
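One common approach, assuming the "features" wanted are the 512-dimensional outputs of the second-to-last Dense layer: build a second `tf.keras.Model` that shares the trained weights but stops at that layer, run `predict`, and write the resulting array with `numpy.savetxt`. A sketch (the layer index, batch, and file name are choices for illustration, not requirements):

```python
import numpy as np
import tensorflow as tf

# same architecture as in the question (weights are random here; in practice
# reuse your already-trained `model` instead of rebuilding it)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(200, 200, 3)),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPool2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# feature extractor: everything up to the 512-unit layer (index -2)
feature_model = tf.keras.Model(inputs=model.inputs,
                               outputs=model.layers[-2].output)

images = np.random.rand(4, 200, 200, 3).astype('float32')  # stand-in batch
features = feature_model.predict(images, verbose=0)        # shape (4, 512)
np.savetxt('features.csv', features, delimiter=',')        # one row per image
```

Pick a different `model.layers[...]` index if you want the flattened convolutional features instead of the penultimate Dense activations.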
Hello,
I am conducting a latent transition analysis (LTA) in Mplus, and I am examining the relationship between class membership, predictors, and outcomes. I am using a three-step process (Asparouhov & Muthén, 2014). During step 3, after I created modal class assignment variables that I used as indicators of the latent classes, the inclusion of predictors in the model (i.e., sex and ethnicity) significantly changed the classification counts and proportions of groups at each time point. From what I understand, creating a modal class assignment variable and accounting for classification errors in the third step should prevent covariates from significantly affecting class membership in the final model. I have read about alternative three-step approaches (e.g., Vermunt & Magidson, 2021) that include covariates demonstrating DIF in the first step of the model and include all covariates (with and without DIF) in step three. Is this an appropriate method, given that I am interested in the effect of sex and ethnicity on latent class membership, transition probabilities, and outcomes?
Is it possible that DIF could cause a significant change in class membership in the LTA? I have read the work by Masyn (2017), who developed a procedure to address DIF, but her analysis included individual items. I am using subscale scores as indicators of the latent classes. Is it possible to test for DIF using subscales, or does this need to be done at the item level? Also, is there any research that addresses DIF within the context of an LTA? I am a bit confused about whether DIF would be examined for each LPA before constructing the modal class variables to include in the LTA or if it would be addressed just in the LTA model.
I hope this makes sense. Any suggestions would be greatly appreciated.