
Images - Science topic

Explore the latest questions and answers in Images, and find Images experts.
Questions related to Images
  • asked a question related to Images
Question
1 answer
How do we evaluate the importance of individual features for a specific property using ML algorithms (say GBR), and how do we construct an optimal feature set for our problem?
image taken from: 10.1038/s41467-018-05761-w
Relevant answer
Answer
It all depends on what the endpoint is that you wish to analyze or evaluate. Linear regression analysis is nice if you have a linear process.
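Since the question mentions GBR, here is a minimal sketch (scikit-learn assumed; the dataset and the importance threshold are placeholders) of how feature importances are typically extracted and used to build a reduced feature set:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data; replace with your own feature matrix X and target property y
X, y = make_regression(n_samples=300, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Impurity-based importances (fast, but can be biased)
print("Impurity-based importances:", gbr.feature_importances_)

# Permutation importances on held-out data (usually more reliable)
perm = permutation_importance(gbr, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importances:", perm.importances_mean)

# Keep only features above an (arbitrary) importance threshold
selected = np.where(perm.importances_mean > 0.01)[0]
print("Selected feature indices:", selected)

Re-fitting and cross-validating the model on the reduced feature set is a common way to check that dropping features does not hurt predictive performance.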
  • asked a question related to Images
Question
1 answer
Please ask if you need additional information.
Relevant answer
Answer
It would be useful to know what concentration of DAPI you used and for how long, in order to troubleshoot.
Additionally, there are some very bright spots present, and I'm uncertain whether they are cells or debris. If your exposure is set to automatic, this could be causing your microscope to adjust the gain or exposure time to these very bright spots, leaving your nuclei underexposed. It could also be that the contrast is set to automatic in the software in which you are viewing/exporting the images, and again these bright spots are interfering. If the second is true, then you would only need to adjust your contrast before exporting, overexposing those dots but correctly exposing your cells.
Hope this helps!
  • asked a question related to Images
Question
4 answers
The reference to our paper on ResearchGate is wrong (see the attached image). This wrong reference is also visible on Google.
The correct reference (DOI 10.3390/land13050592) appears on Google Scholar, on the HAL-CNRS website (hal-04566042, version 1), and on my ORCID page (https://orcid.org/0000-0002-3411-2647).
Please correct this wrong reference on ResearchGate. I tried to correct it yesterday, but it remains wrong.
Best regards
Marianne Cohen
Relevant answer
Answer
Could you provide the link you refer to (with the 9 authors)? I cannot find it.
What I can find is that (in part?) it has to do with the fact that the paper entitled "Resilience of Terraced Landscapes to Human and Natural Impacts: A GIS-Based Reconstruction of Land Use Evolution in a Mediterranean Mountain Valley" is missing the proper details (such as the DOI, journal title, issue number, etc.).
The mix-up with the 9 unrelated authors is most likely due to the fact that your paper is part of a special issue https://www.mdpi.com/journal/land/special_issues/resilience_historical_landscapes (where the second paper is the one with the nine authors).
Best regards.
As indicated by Wolfgang R. Dick, claim authorship of the other link (and remove it).
  • asked a question related to Images
Question
3 answers
Which patching method is suitable and practical for training and testing deep learning networks when it comes to patching the input data? Is it small patches with overlap, such as 50x50 around individual pixels, or large non-overlapping patches such as 256x256?
Relevant answer
Answer
Several image patching strategies are suitable and practical for deep learning models, depending on the specific task, dataset size, computational resources, and the other factors you are dealing with. These include random patch extraction, grid patch extraction, sliding window, patch augmentation, patch overlapping, and center patching, among others. However, based on your question:
I recommend small overlapping patches (e.g., 50x50 with overlap) because they offer more data diversity, capture finer details and variations in the images, help mitigate the effects of slight translations, and enhance model robustness. I'd also point out that their computational cost is higher than for non-overlapping patches, and they require careful management to avoid overfitting, especially with limited datasets.
Large non-overlapping patches (such as 256x256) reduce the computational cost, making them more efficient for large datasets, and they simplify the training process due to fewer samples and reduced augmentation complexity. On the other hand, they may miss fine-grained spatial details present in smaller patches and are more sensitive to image transformations because of the larger patch size.
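To make the trade-off concrete, here is a minimal NumPy sketch (my own illustration; patch sizes and strides are placeholders) of extracting overlapping versus non-overlapping patches from a 2D image:

import numpy as np

def extract_patches(image, patch_size, stride):
    # Slide a square window of side patch_size across the image with the given stride
    patches = []
    h, w = image.shape[:2]
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.array(patches)

image = np.random.rand(512, 512)  # placeholder image
overlapping = extract_patches(image, patch_size=50, stride=25)        # 50x50 with 50% overlap
non_overlapping = extract_patches(image, patch_size=256, stride=256)  # 256x256, no overlap
print(overlapping.shape, non_overlapping.shape)  # many small patches vs. a few large ones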
I hope it helps,
Regards...
  • asked a question related to Images
Question
3 answers
Please tell me the steps to find the threshold of an NDWI image in QGIS.
Relevant answer
  • asked a question related to Images
Question
2 answers
The attached image is a streak plate of DH5a cells on a non-antibiotic plate. I have white colonies visible at the start of my streaking [where I drew the initial deposit]. I also see these white colonies on the transformed plate with carbenicillin [image not attached]. In addition, when fresh non-antibiotic plates were incubated without inoculation, 3-4 colonies were found. Does my incubator need cleaning?
Relevant answer
Answer
You may look into the article here. Hope it helps
  • asked a question related to Images
Question
5 answers
Hello esteemed researcher, may I direct your attention to the image? I am curious to understand why there is only a minimal drop in pressure once the elution begins.
thank you.
Relevant answer
Answer
Please show the pressure reading on the y-axis. What conditions were used for the purification? Was it a gradient elution?
  • asked a question related to Images
Question
2 answers
The lanes contain the same sample and concentration (15 µg). The image shows GluN2A, PSD95, and beta-actin (top, middle, and bottom, respectively). Since this is the same sample, every lane should theoretically look the same, but the only target that comes close is beta-actin. GluN2A and PSD95 seem to follow the same pattern.
Relevant answer
Answer
Overnight for 16 hours at 4 degrees, wrapped in Saran wrap. I reprobed the blot with GABA as well, and those bands came out even.
  • asked a question related to Images
Question
2 answers
In this image, the d-orbital splitting at the Fermi energy is shown for (a) Mn atoms in the bulk, (b) Mn atoms at the interface with MgO, and (c) Mn atoms above the interface of the thin film. CF (SOC) denotes crystal field (spin-orbit coupling). I'm a little confused about the eg and t2g labels, as there is a group of three in (a). What is the explanation for this?
Relevant answer
Answer
Dear Prof. Mohit Verma
Let us wait for the answer of a specialist in crystal field theory.
In general, crystal field theory assumes that the nature of the ligands and their arrangement around a central ion (in a symmetry such as octahedral) reduces the degeneracy of the d orbitals and changes their energy.
Best Regards.
  • asked a question related to Images
Question
1 answer
I hope this message finds you well.
I'm writing to seek your expertise regarding image editing tools that can add scale bars to photos. I'm working on a research project that involves analyzing microscopic images of specimens, and accurately representing the scale is crucial for our analysis.
I've tried searching for online tools that can perform this task, but I'm having difficulty finding reliable and user-friendly options. I'm hoping you might be able to recommend a specific website or software that can effectively edit photos and add scale bars.
Relevant answer
Answer
Dear Nuha,
Thank you for your message! I am sorry, but I don't know any image-editing tool for microscopic images. I work with art; maybe it is more in the natural sciences that these tools are used?
I wish you all the best for your project!
Silke
  • asked a question related to Images
Question
2 answers
Hello, when I use a reflective light microscope (with light entering at approximately 45 degrees), I can see a lot of details and textures. I suspect the image may contain some sub-annual lines. However, when I use a transmissive light microscope or a reflective light microscope (with light entering vertically), I can see the main growth lines more clearly, and many micro-growth lines and textures are not visible under oblique light. I am conducting cross-dating and am unsure which perspective of images to use for cross-dating.
Relevant answer
Answer
Lusha M Tronstad Thank you very much for your help! The current shell samples I have are wrapped in resin, with a thickness of about 2cm. I'm using a reflected light microscope to observe the growth lines in the hinge plate area. Although the growth lines are relatively clear in the eyepiece of the reflected light microscope, they appear blurry and there are some bright spots when photographed with a camera connected to the microscope. I have consulted some literature, and it seems that this may be due to significant light scattering in thick sections. It looks like I need to cut my samples thinner. Thanks again for your help!
  • asked a question related to Images
Question
1 answer
What Canadian university teams are doing medical image processing or gastrointestinal endoscopic image processing based on deep learning technology?
Relevant answer
Answer
Dear Zhang Kaixuan,
I am not an expert in the topic you refer to, which I think is interesting.
In my opinion, the best approach for an accurate and justified answer is through bibliometrics.
If you search platforms such as Web of Science or Scopus, you can see what publications exist around your topic, along with the authors, affiliations, countries and other relevant data.
  • asked a question related to Images
Question
5 answers
I propose a discussion on my text "India and Ancient Greece: Similar Allegories, Analogies, and Differences". The text has been published in the volume "Discourses in Greek Studies", edited by Professor Anil Kumar Singh (Indo-Hellenic Research Centre, New Delhi). In our inquiry, we aim to investigate the analogies, correspondences, and similarities between Indian Cultural Heritage and Ancient Greek philosophy. Our study deals with the Kaṭha Upaniṣad and Bṛhadāraṇyaka Upaniṣad as regards Indian Cultural Heritage and with Plato’s Phaedrus as regards Ancient Greek philosophy. We will concentrate our attention on the image of the individual soul as a charioteer leading a chariot with two horses exposed in Plato’s Phaedrus. This image has strong analogies with the image of the individual contained in the Kaṭha Upaniṣad I, 3. Within this part of our analysis, we investigate the figure of the charioteer and the two horses of the chariot. We point out the difference between the souls of the human beings, on the one hand, and the souls of the gods, on the other hand. The allegory of the soul as the chariot in Plato’s Phaedrus proves to be a description of the human condition; the dimension of the earthen existence turns out to be the consequence of the imperfection of the human soul. This imperfection is connected to the absence of an adequate level of knowledge. Knowledge is, within Plato’s image of the chariot, a factor which hinders the fall of the human being in the earthen dimension. Knowledge is necessary to avoid the decadence of the human soul. Furthermore, knowledge is necessary for the soul in order that the soul can return to its original dimension. The earthen dimension is not the original dimension of the human soul; it is not a dimension in which the human soul ought to remain. The earthen dimension is a dimension which should be abandoned. The nostalgia for the authentic dimension of reality is one of the characteristics of the soul enslaved in the earthen dimension. The image of the chariot of Phaedrus has analogies with the Kaṭha Upaniṣad 1.3.3–1.3.9. Through the analysis of the Kaṭha Upaniṣad, we can observe that understanding is necessary for the human being to reach a dimension of reality which is different from the average life dimension; understanding is necessary for the individual to be free from the chain of rebirths. Constitutively, there can be a contrast between elements composing the human being, i.e. between the intellect and the senses. If the senses are not subdued to a discipline, they hinder the journey of the human being to the authentic dimension of reality. Only the development of understanding can enable the human being to train the senses in an adequate way. In this context, too, the idea is present that the human being should reach a different dimension if the human being wishes to be free from the chain of rebirths; the initial condition of the human being is a condition which should be abandoned. The average dimension in which the human being lives is not the authentic dimension of the human being. The image of the chariot introduces us, therefore, to a frame of correspondences between Plato’s Phaedrus and the Kaṭha Upaniṣad which can be listed as follows: - The average life of the human being is not the authentic dimension of the individual. - The human being is, as such, a composed entity. - Any human being has a plurality of factors in himself. - The human being is enslaved in a chain of rebirths. 
- The enslavement in the average life dimension is not definitive; an alternative dimension can be reached by the human being. - Only through a process of education can the human being reach the correct disposition of the intellect. - Knowledge and understanding are necessary for the human being to be able to lead his life.
Relevant answer
Answer
One may say that it all started with the Gods – most Greek Gods lived on Mount Olympus, while Indian counterparts lived on Mount Kailash. Ambrosia in Greek Mythology and Amrita (Nectar) in Hindu Mythology was a drink of the Gods that bestowed immortality on those who drank it. Titan Helios, the Sun God rode a golden chariot across the sky that was pulled by seven bulls and Hindu deity Surya rode his flaming chariot across the sky that was pulled by seven horses.
Aryabhata, the first great mathematician and astronomer, introduced to the Greeks the calculation of the rotation of the Earth relative to the fixed stars as twenty-three hours, fifty-six minutes and 4.1 seconds (actual: 23:56:4.091), and the length of the year as 365 days, 6 hours, 12 minutes and 30 seconds, which is only about 3 minutes and 20 seconds off the true length of the year.
The claim that Indo-Greek interaction took place long before Alexander's military campaign can be verified through ancient Greek and Sanskrit literature. Sanskrit literature describes these people as formidable warriors, conversant with scientific knowledge that was strange to the indigenous people.
Historical accounts of Indo-Greek interaction in ancient times are found in classical Indian literature such as the Yuga Purana, the Mahabharata and Buddhist literature. Greeks were called 'Yavanas' in ancient India.
These are some thoughts on the subject; wishing you success, dear Gianluigi Segalerba !!!
  • asked a question related to Images
Question
1 answer
Nowadays, scientific illustration has become an integral part of publications. Apart from original research images, a graphical abstract is imperative for journal submission. It often receives even more attention than the main article. As a person with mediocre drawing skills, I have always been fascinated by published graphical abstracts. In recent times, many illustration-making software tools have come to the rescue. Many tools provide free/paid services such as icon libraries, flow charts, etc., which form the basis for creating illustrations according to one's requirements. Some tools provide templates, which can be modified by paying for a subscription. What perplexes me is: are these templates making us copycats?
For example, a simple Google image search returned exact matches in more than 50 publications where an image template was only slightly modified. As a person who tends to look at the images first, I find it really confusing to identify a publication based on its graphical abstract. In essence, these templates fail to create the hype/curiosity among the audience that they were originally aimed at, and they make all our work look alike, like "Agent Smith". Is this acceptable?
Relevant answer
Answer
To a large extent...it is a tool for killing human creativity
  • asked a question related to Images
Question
3 answers
I believe that text-to-image and image-enhancing tools are fascinating. However, we could all recently see how negative the impact of such tools can be in the hands of unprepared people, like the authors who published the paper "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway", which was retracted by Frontiers.
While I reckon that, at the moment, AI tools seem to be limited and often not scientifically accurate, I believe this can be improved with prompt engineering. Most users I see using this sort of tool are not well educated in prompting. However, "AI hallucinations" seem to be more the rule than the exception when real scientific images are needed.
Any comments and ideas on how to improve AI image generation for the scientific context? I would welcome examples of failures and success cases, if any :-)
Relevant answer
Answer
AI tools can also improve the quality of images for researchers who find it hard to translate scientific concepts into visual aids, says Rodriguez. With generative AI, researchers still come up with the high-level idea for the image, but they can use the AI to refine it, he says.
Cisco sponsors collaborative articles related to the Artificial Intelligence skill, covering: benefits of AI-generated images, challenges of AI-generated images, best practices for using AI-generated images, examples of using AI-generated images, and future directions for using AI-generated images.
Regards,
Shafagat
  • asked a question related to Images
Question
5 answers
In my research work, we included a TEM/EDS image for Grade 91 steel, but the reviewer asked: "You still need to provide analytical TEM/EDX elemental mapping." How should I answer this?
Relevant answer
Answer
What is "TEM/EDS image"? EDS mapping is the same thing as EDS image.
  • asked a question related to Images
Question
4 answers
Greetings to all!!
DNA is isolated from infected cotton leaf.
The image is attached. The DNA looks somewhat sheared.
What are the possibilities of using it for PCR?
Relevant answer
Answer
It does look like partially degraded DNA, but it covers a wide range of large sizes. As PCR only amplifies short DNA templates, it should amplify very well using these DNA samples, because the degradation is random and many larger fragments will still contain your DNA template sequence.
  • asked a question related to Images
Question
3 answers
Hello guys
I want to employ fMRI in my research.
As a first step, I want to know whether fMRI data is an image like MRI,
or whether I should treat fMRI data as a time series when it comes to analyzing it.
thank you
Relevant answer
Answer
MRI datasets typically result in high-resolution three-dimensional images representing anatomical structures. These images are often stored in formats such as DICOM (Digital Imaging and Communications in Medicine) or NIfTI (Neuroimaging Informatics Technology Initiative). fMRI datasets produce time-series data representing changes in brain activity over time. These data are often stored in formats compatible with neuroimaging software packages, such as NIfTI, Analyze, or MINC (Medical Imaging NetCDF). fMRI data can be conceptualized and analyzed both as images and time-series. The choice of representation depends on the specific research question and analysis techniques being employed. For many analyses, researchers will use both approaches, leveraging the spatial information provided by the image-like representation and the temporal dynamics captured in the time-series data.
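As a small illustration of the two views (nibabel assumed; the file path is a placeholder), an fMRI NIfTI file is simply a 4D array that can be sliced either as a stack of 3D volumes or as per-voxel time series:

import nibabel as nib

img = nib.load("path/to/fmri_run.nii.gz")   # placeholder path to a 4D fMRI NIfTI file
data = img.get_fdata()                      # array with shape (x, y, z, time)
print("4D shape:", data.shape)

volume_0 = data[..., 0]        # one 3D brain volume (image-like view)
voxel_ts = data[30, 30, 20, :] # time series of a single voxel (time-series view)
print("Volume shape:", volume_0.shape, "Number of time points:", voxel_ts.shape[0])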
  • asked a question related to Images
Question
1 answer
For Raman/PL peak-position mapping corresponding to a particular peak in the WiRE software, we need to do the curve fitting first, and after the fitting we do the mapping. In my case, after curve fitting, the fitted PL/Raman spectra contain unwanted peaks; that is, peaks appear where there should not be any after fitting. In that case the mapping image is not correct. Can anyone please give a suggestion on how to remove these unwanted peaks?
Relevant answer
Answer
Do you have an example spectrum? Usually you can restrict your fitting range by the parameter settings. Additionally, what are these unwanted peaks? Cosmic rays?
  • asked a question related to Images
Question
1 answer
Anyone here who wants to research and work with me in image processing and machine learning?
Relevant answer
Answer
How is that please?
  • asked a question related to Images
Question
3 answers
I have been trying to run AutoDock Vina for my undergraduate thesis for many months and have been getting the following error. Does anyone have any ideas? I have attached an image, but in case it cannot be opened, here is the error message from Cygwin64:
Reading input...
Parse error on line 1 in file "2jgd_DOCK.pdbqt": Unknown or inappropriate tag
mv: cannot stat 'ligand*.pdbqt': No such file or directory.
Please help!
Relevant answer
Answer
The error message 'Parse error on line 1 in file "2jgd_DOCK.pdbqt"' indicates an unknown or inappropriate tag. Please verify that the '2jgd_DOCK.pdbqt' file is formatted correctly to resolve this issue. You can use a tool like Open Babel to convert your files to the proper format, or manually check the file against the format described in the AutoDock Vina documentation.
To resolve the error message "mv: cannot stat 'ligand*.pdbqt': No such file or directory," please confirm that the ligand file ('ligand.pdbqt') is located in the directory where you are executing the command. If the file is in a different directory, move it to the right place or specify the correct path in your command.
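If it helps, here is a rough Python check (the list of accepted record tags below is my own assumption based on typical PDBQT files, not an official specification) that reports lines whose leading keyword Vina is likely to reject:

KNOWN_TAGS = {"ATOM", "HETATM", "ROOT", "ENDROOT", "BRANCH", "ENDBRANCH",
              "TORSDOF", "REMARK", "MODEL", "ENDMDL", "TER"}

def check_pdbqt(path):
    # Print every line whose first token is not a recognised PDBQT record tag
    with open(path) as handle:
        for number, line in enumerate(handle, start=1):
            fields = line.split()
            if fields and fields[0] not in KNOWN_TAGS:
                print(f"Line {number}: unexpected tag '{fields[0]}'")

check_pdbqt("2jgd_DOCK.pdbqt")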
Happy Docking!!
  • asked a question related to Images
Question
5 answers
Line mapping
Relevant answer
Answer
A line scan can deliver very desirable information about the distribution of elements. But if you do not need it, then the line scan is not important, not important at all.
  • asked a question related to Images
Question
2 answers
What is a popular facial image dataset for detecting autism, and what are the sources?
Relevant answer
Answer
YTUIA - research paper: https://www.mdpi.com/2075-4418/14/6/629
  • asked a question related to Images
Question
1 answer
How do I prepare the sample for an optical microscope in order to calculate droplet size in an emulsion from the image and get a uniform droplet size distribution image?
Relevant answer
Answer
Hi Sruthi!
Option 1:
I recommend utilizing SEM analysis to capture images of the droplets, coupled with ImageJ software or the SEM's built-in tools for precise droplet size measurements.
For analyzing droplet distribution, consider employing Dynamic Light Scattering (DLS) analysis. DLS provides a bell curve representation and detailed insights into the size distribution within the sample.
Option 2:
If you want to do it with a microscope, use this as a guide and tweak the procedure:
1. Dilute the emulsion if necessary for better droplet separation.
2. Place a small amount of the diluted emulsion on a clean microscope slide.
3. Add a cover slip to flatten and immobilize the droplets, avoiding air bubbles.
4. Seal the edges of the cover slip if needed.
5. Set up the microscope with appropriate magnification and illumination.
6. Capture images of the emulsion droplets.
7. Analyze the images using software or manual methods to measure droplet sizes.
8. Generate a droplet size distribution plot to visualize the uniformity of droplet sizes within the emulsion.
To obtain droplet distribution information from droplet size data:
1. Measure individual droplet sizes.
2. Group sizes into intervals (bins).
3. Count droplets within each interval.
4. Calculate relative frequencies.
5. Plot a histogram or bar chart of droplet sizes.
6. Analyze the distribution shape.
7. Calculate descriptive statistics.
8. Draw conclusions about the droplet size distribution in your sample.
The size distribution of particles in a sample, such as droplets in an emulsion, can often be represented by a probability distribution function.
The formula is the log-normal probability density function, which relates droplet size to the mean of the natural logarithm of the droplet sizes (log-mean) and the standard deviation of the natural logarithm of the droplet sizes (log-standard deviation).
This formula describes the probability of finding a droplet with a particular size x in the sample. By fitting experimental data to this distribution, you can estimate the parameters μ and σ and characterize the size distribution of the droplets in your emulsion.
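As a sketch of steps 4-8 and the log-normal fit (NumPy, SciPy and Matplotlib assumed; the droplet sizes below are synthetic placeholders for your measured values):

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder droplet diameters in micrometres; replace with your measurements
sizes = np.random.lognormal(mean=1.0, sigma=0.4, size=500)

# Fit a log-normal distribution (location fixed at 0)
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)
mu, sigma = np.log(scale), shape   # log-mean and log-standard deviation

# Histogram of measured sizes with the fitted probability density overlaid
x = np.linspace(sizes.min(), sizes.max(), 200)
plt.hist(sizes, bins=30, density=True, alpha=0.6, label="measured")
plt.plot(x, stats.lognorm.pdf(x, shape, loc, scale), label="log-normal fit")
plt.xlabel("Droplet diameter (µm)")
plt.ylabel("Probability density")
plt.legend()
plt.show()

print(f"log-mean = {mu:.3f}, log-std = {sigma:.3f}")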
I hope it helps
  • asked a question related to Images
Question
3 answers
I am working on the design of an image encryption algorithm. The algorithm has a sound entropy score, low correlation, resistance to cropping attacks, and resistance to salt-and-pepper noise. The only issue with the algorithm is its resistance to Gaussian noise.
Relevant answer
Answer
IYH Dear Rashad Ali
Generally, no. Double Random Phase Encoding is generally considered an irreversible image encryption technique due to its underlying mathematical nature. Why? The main reason lies in the convolution process applied during encoding, which blends the original image with a pseudorandom reference function generated via two separate random phases. Upon decryption, the inverse operation requires precise knowledge of both random phases employed initially; otherwise, perfect recovery of the original image becomes impossible.
Certain variations of the DRPE algorithm have been explored for potentially reversible scenarios. Extensions incorporate auxiliary channels or side information alongside the encrypted images themselves, enabling partial restoration even when facing missing or corrupted phase terms.
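For intuition, here is a minimal NumPy sketch of classical DRPE (my own simplified illustration, not a production implementation); it shows that recovery only works when both random phase masks are known:

import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))   # placeholder "plain image"

phase1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane random phase mask
phase2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane random phase mask

# Encryption: apply mask 1, go to the Fourier plane, apply mask 2, transform back
encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

# Decryption: undo the operations with the conjugate masks
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2)) * np.conj(phase1)

print("Max reconstruction error:", np.max(np.abs(np.abs(decrypted) - img)))

If either phase mask is replaced by a wrong guess, the reconstruction collapses into noise, which is exactly the point made above.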
  • asked a question related to Images
Question
6 answers
I noticed a meta-analysis that included a forest plot with a line of no effect equal to 0.62, as indicated in the attached image. Could that be correct?
Relevant answer
Answer
In a forest plot, the "line of no effect" typically represents a null effect or a reference point against which the effect sizes of different studies are compared. This line is often drawn at a value of 0 on the x-axis, indicating no effect or no difference between groups or conditions being compared.
In some cases, depending on the context of the data being presented in the forest plot, the line of no effect may be positioned at a value other than 0 or 1. This could occur when:
  1. Different Baseline Values: If the outcome being measured has a baseline value that is not zero, the line of no effect may be positioned at the baseline value rather than zero. For example, if the baseline value for a particular outcome is 50, the line of no effect might be drawn at 50 instead of 0.
  2. Alternative Reference Point: In certain analyses or comparisons, researchers may choose a different reference point as the line of no effect based on theoretical or practical considerations. For instance, if the effect sizes are ratios or percentages, the line of no effect might be set at 1 instead of 0.
  3. Non-Numerical Variables: In some cases, forest plots may display categorical variables or non-numeric data where the concept of a "line of no effect" may not be applicable in the same way as with continuous numerical data.
Overall, while the default position for the line of no effect in a forest plot is typically at 0 on the x-axis, it can be set to other values depending on the specific context and nature of the data being presented.
  • asked a question related to Images
Question
3 answers
Given a feature vector is it possible to reconstruct the original face image? I read about the reconstruction of MNIST, but what about face images?
Relevant answer
Answer
Yes, if you create a decoder and the feature vector contains the proper information.
With PCA, it was possible.
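A small illustration of the PCA "decoder" idea (scikit-learn assumed; the Olivetti faces dataset is downloaded on first use):

from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

faces = fetch_olivetti_faces()      # small face dataset, 64x64 grayscale images
X = faces.data                      # each row is a flattened 4096-pixel face

pca = PCA(n_components=100).fit(X)  # "encoder": 4096 pixels -> 100 features

features = pca.transform(X[:1])                    # feature vector of the first face
reconstruction = pca.inverse_transform(features)   # "decoder": features -> pixels

print("Feature vector length:", features.shape[1])
print("Reconstructed image shape:", reconstruction.reshape(64, 64).shape)

How faithful the reconstruction is depends on how much information the feature vector retains; embeddings learned purely for recognition rather than reconstruction generally allow only approximate inversion.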
  • asked a question related to Images
Question
4 answers
Agri-SAR image database?
Relevant answer
Answer
Of course, yes. Can anyone suggest a way of getting agricultural SAR images, or any available database?
  • asked a question related to Images
Question
2 answers
I have several cells in one picture, and all of them contain vesicles. The cells are labelled with a red marker and the vesicles with a green marker. I would like to know how many vesicles are located inside each cell, and I would like to count all the cells in my image at once. How is it possible to do this?
Relevant answer
Answer
Depending on which tool you use, there are image processing packs that offer you several possibilities as far as image segmentation goes. MATLAB, C++, Python and other environments/languages allow you to customize your algorithm/program.
You will also need to work with pattern recognition techniques for clustering and classification of your subimages.
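A rough scikit-image sketch of the idea (my own simplified example; the channels here are random placeholders, and thresholds and size filters would need tuning for real images):

import numpy as np
from skimage import filters, measure

# Placeholder two-channel image: red = cells, green = vesicles
red = np.random.rand(512, 512)
green = np.random.rand(512, 512)

# Segment and label cells in the red channel
cell_mask = red > filters.threshold_otsu(red)
cell_labels = measure.label(cell_mask)

# Segment vesicles in the green channel
vesicle_mask = green > filters.threshold_otsu(green)
vesicle_labels = measure.label(vesicle_mask)

# Count vesicles whose centroid falls inside each labelled cell
counts = {}
for vesicle in measure.regionprops(vesicle_labels):
    r, c = (int(round(v)) for v in vesicle.centroid)
    cell_id = cell_labels[r, c]
    if cell_id > 0:
        counts[cell_id] = counts.get(cell_id, 0) + 1

print(counts)   # {cell label: number of vesicles inside}

In ImageJ/Fiji the same logic can be reproduced with per-channel thresholding plus "Analyze Particles".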
  • asked a question related to Images
Question
1 answer
Are synthetic thermal images useful in bio-medical image processing for diagnostic purposes? Please share some resources of such data sets.
Relevant answer
Answer
Well, we normally don't use thermal images to reflect internal information and its distribution (we should know the illness ROI), because they put more focus on structural information. That means this kind of image is more suitable for construction and for occasions that need only boundary and structural information.
  • asked a question related to Images
Question
2 answers
I need something like the image given below, but with a greater number of words and their associations.
Relevant answer
Answer
I understand that you would like to visualize the relationships between a set of words, such as the example you provided with greater number of words and their associations.
To create a graph visualization of the relationships between words based on their word vectors, you can use the following steps:
Generate the word vectors using a pre-trained Word2Vec model, such as the one provided by Google News.
Calculate the similarity between each pair of word vectors using the cosine similarity measure.
Select the top-K similarities for each word and construct a graph where each word is a node, and the edges represent the similarity relationships.
Use a tool like NetworkX to visualize the graph using a force-directed layout.
Here's an example Python code that demonstrates this approach:
Python code:
import gensim
import networkx as nx
import matplotlib.pyplot as plt
# Load the pre-trained Word2Vec model (returns a KeyedVectors object)
model = gensim.models.KeyedVectors.load_word2vec_format("path/to/GoogleNews-vectors-negative300.bin.gz", binary=True)
# List of words to find similar words for
words = ['computer', 'phone', 'table', 'alice', 'wonderland', 'king', 'queen', 'uncle', 'aunt', 'woman', 'man', 'researchgate', 'profile']
# Keep only the words that are present in the model's vocabulary
words = [word for word in words if word in model]
# Find the top-K similar words for each word (most_similar ranks by cosine similarity)
K = 5  # number of top similar words to find
most_similar = {word: model.most_similar(word, topn=K) for word in words}
# Construct the graph: each word is a node, edges carry the cosine similarity as weight
G = nx.Graph()
G.add_nodes_from(words)
for word, neighbours in most_similar.items():
    for neighbour, similarity in neighbours:
        G.add_edge(word, neighbour, weight=similarity)
# Visualize the graph using a force-directed layout
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_size=500, font_size=8)
plt.show()
  • asked a question related to Images
Question
6 answers
In pre-processing, when transforming hyperspectral data into Minimum Noise Fraction (MNF) data, the dimensionality is reduced and at the same time the wavelength information is lost, because MNF is a dimensionality reduction technique and gives only the best bands. Then how can we compare the MNF spectral curve with the USGS spectral library using ENVI software? Is any tutorial video available?
Relevant answer
Answer
To find noise-free bands or reduce the number of bands, we must set the 'Number of Components' field in the 'Inverse MNF Rotation' operation. If not, all bands will be used.
This option is not visible to me in ENVI 5.6. Any ideas on this?
  • asked a question related to Images
Question
4 answers
I have applied the SHA-256 hash function to the plain image to generate the initial values of the chaotic map in the image encryption algorithm and include it in key space analysis. So, when the receiver wants to decrypt the received image from the transmission, he needs the key, which is the SHA-256 output. Now, someone has asked me, "SHA-256 hash function is applied, how to transmit the hash values but with no extra transmission"?
Can anyone give me some hints on how to transmit the SHA-256 function output without any extra transmission?
Relevant answer
Answer
Transmitting the SHA-256 function output without any additional transmission involves sending the result along with the data it's derived from in a single transmission. This can be achieved by including the SHA-256 hash along with the original data. Here's how you can do it:
  1. Calculate the SHA-256 Hash: First, compute the SHA-256 hash of the data you want to transmit. This generates a fixed-size hash value unique to the input data.
  2. Transmit the Data and Hash Together: When transmitting the data, append or prepend the SHA-256 hash to it. This ensures that both the original data and its corresponding hash are sent in a single transmission.
  3. Recipient Verification: Upon receiving the data, the recipient can separate the hash from the data. They can then independently compute the SHA-256 hash of the received data and compare it with the transmitted hash. If the computed hash matches the transmitted hash, it indicates that the data has not been altered during transmission.
Here's a simple example in Python:
import hashlib

def transmit_data_with_hash(data):
    # Calculate the SHA-256 hash of the data
    hash_value = hashlib.sha256(data.encode()).hexdigest()
    # Transmit data and hash together
    transmitted_data = data + hash_value
    # Return the transmitted data
    return transmitted_data

def verify_transmitted_data(transmitted_data):
    # Separate the data and hash (a SHA-256 hex digest is 64 characters long)
    data = transmitted_data[:-64]
    transmitted_hash = transmitted_data[-64:]
    # Recompute the hash of the received data
    computed_hash = hashlib.sha256(data.encode()).hexdigest()
    # Compare computed hash with transmitted hash
    if computed_hash == transmitted_hash:
        return "Data integrity verified. No alterations detected."
    else:
        return "Data integrity verification failed. Possible alterations detected."

# Example usage
original_data = "Hello, world!"
transmitted_data = transmit_data_with_hash(original_data)
print("Transmitted data with hash:", transmitted_data)

# Simulating transmission...
# Upon receiving transmitted_data, the recipient verifies it
verification_result = verify_transmitted_data(transmitted_data)
print(verification_result)
This code demonstrates how to transmit data along with its SHA-256 hash and verify data integrity upon reception. Remember to adjust the code as needed for your specific use case and programming environment.
  • asked a question related to Images
Question
1 answer
Dear all,
I treated wild-type cells and knockout cells with two drugs X and Y for 24 hours. Then I measured the Gene A expression. Drug Y significantly increased Gene A expression around 500-fold only in WT cells. Also, Gene A expression is really low (~10%) in KO cells compared to WT cells. Please see the image.
My very basic understanding of statistical methods for this is a two-way ANOVA since I have two independent variables, the cell type and drug treatments. However, the issue is that these huge changes make the data fail to pass Levene's test for homogeneity as well as Shapiro's test for normality. A two-way ANOVA returns weird results and seems not suitable.
What overall tests should I perform, and what post hoc analysis should I follow?
Thank you in advance!
Relevant answer
Answer
  • asked a question related to Images
Question
3 answers
I'm seeking advice on how to effectively mosaic four Landsat tiles for analysis. Three of these tiles are from Landsat 5, with pixel depth at 16-bit unsigned, while the remaining one is from Landsat 7, with 8-bit unsigned pixel depth. Is it appropriate to directly mosaic these tiles, considering the potential discrepancy in pixel values? Should I convert the Landsat 7 tile from 8-bit to 16-bit before mosaic? If conversion is necessary, how can I accomplish it using raster calculator tools?
Thank you.
Kind regards,
Sohel
Relevant answer
Answer
Mosaicking Landsat tiles from different satellites (5 vs. 7) and with different pixel depths (16-bit unsigned vs. 8-bit unsigned) requires careful consideration to ensure the resulting composite image maintains spatial and spectral integrity. Therefore, it is very important to bring them to the same pixel depth before mosaicking.
Here's how I would approach this problem:
Landsat 5 images have a pixel depth of 16 bits (unsigned--non-negative values), providing a range of 0-65535 possible values for each pixel. In contrast, Landsat 7 images with an 8-bit depth have a value range of 0-255. This discrepancy can lead to differences in the representation of brightness and spectral information.
I can linearly scale the 8-bit values to 16-bit values. This means mapping the 0-255 range of the 8-bit image to the 0-65535 range of a 16-bit image.
In ArcGIS: Use the Raster Calculator in the Map Algebra toolset.
- Example syntax : ("Landsat7_BandX" / 255.0) * 65535
Here, replace `"Landsat7_BandX"` with the actual name/path of my Landsat 7 raster band.
I would confirm that all tiles are georeferenced to the same coordinate system to ensure spatial alignment before mosaicking. I would also examine the mosaic for any visible seams or discrepancies in the areas where tiles overlap.
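Outside ArcGIS, the same linear rescaling can be sketched in Python with rasterio (file names are placeholders; check first whether your 8-bit product is raw DN or an already scaled product before stretching):

import numpy as np
import rasterio

with rasterio.open("landsat7_band.tif") as src:      # placeholder input path
    band8 = src.read(1).astype(np.float64)
    profile = src.profile

# Map the 0-255 range linearly onto 0-65535
band16 = np.round(band8 / 255.0 * 65535).astype(np.uint16)

profile.update(dtype=rasterio.uint16)
with rasterio.open("landsat7_band_16bit.tif", "w", **profile) as dst:  # placeholder output path
    dst.write(band16, 1)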
Hope this helps, bro.
  • asked a question related to Images
Question
2 answers
Hello,
We used Zeiss LSM880 & Zen software for tiling images, scanning approximately 32 areas in one tile image. How can I save these 32 images individually, not as a tile image? Attached is an example for the DAPI channel.
Thank you.
Relevant answer
Answer
I would recommend using ImageJ or Fiji to extract each vignette and save them. This will need some programming, but it's fairly straightforward. I did it some years ago: I extracted each single picture from the montage and produced a stack from them.
Try the attached ImageJ macro and adapt it if need be.
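If you end up doing it outside ImageJ, here is a minimal Python sketch of the same idea (tifffile assumed; the grid layout and file name are placeholders for your 32-tile montage):

import tifffile

montage = tifffile.imread("tile_scan_dapi.tif")    # placeholder montage image
rows, cols = 4, 8                                  # adjust to your actual tiling layout
tile_h, tile_w = montage.shape[0] // rows, montage.shape[1] // cols

for i in range(rows):
    for j in range(cols):
        tile = montage[i * tile_h:(i + 1) * tile_h, j * tile_w:(j + 1) * tile_w]
        tifffile.imwrite(f"tile_{i * cols + j:02d}.tif", tile)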
  • asked a question related to Images
Question
2 answers
This image was taken for my research project. It will be a great help if someone helps me with its identification
Thank you!
Relevant answer
Answer
Hello.
I think it is Tetraedron trigonum or T. constrictum. Have a look at https://images.app.goo.gl/aYmbt3mExtxfFQWg8
Pseudostaurastrum is another possibility.
  • asked a question related to Images
Question
7 answers
I have a transparent pipe which takes water from a tank.
The flow is driven by a pump, which takes water from the tank and makes it flow through the pipe.
The water stored in the tank is open to the atmosphere.
When I look at the flow through the transparent pipe, I observe very large bubbles. I have attached an image for your review.
I am interested to learn: where is this air in the flow coming from?
Relevant answer
Answer
As James Garry suggested, dissolved air cannot cause a sustained train of bubbles like the one in the photo. The bubbles seem to be consistent in your experiment, and most probably there is a consistent (although small) source of air being introduced into the flow. Try what James has suggested and close every possible leak.
  • asked a question related to Images
Question
2 answers
I've always used abd as a loading control for western blot normalization, and I use Ponceau S stain as a transfer control (along with keeping an image - a picture taken with my cellphone - for supplementary material submission). My question is:
Can I use this picture to quantify total protein? Or is there a specific way/equipment to image the membrane stained with Ponceau S in order to quantify total protein?
Relevant answer
Answer
Depending on the quality of the image, it may be possible to use your Ponceau image along with our Phoretix 1D software - https://totallab.com/products/phoretix-1d/ to create a multiplex image, then use the Ponceau channel for total protein normalisation.
I've done a tutorial video on YouTube on how to normalise Western Blots within our software which you can find here - https://youtu.be/H05aRpw8Wdg?si=LJNbvUcoakrFBmHw
Hope this helps.
  • asked a question related to Images
Question
5 answers
Hello everyone, I want to create the geometry of a wind turbine blade using a CAD system. To determine the length of the profile along the blade, I use the chord of the airfoil, but what about the root of the blade, where there is a circular foil? How do I determine the radius of those circular airfoils that form the root of any horizontal-axis wind turbine?
Below you will find an image of the wind turbine blade where the circular profiles are clearly visible.
Relevant answer
Answer
I guess the blade radius at the root comes from a compromise between its loading capacity, the hub size, the pitch mechanism, and the overall costs of the three components (blade, hub, and pitch mechanism).
If you are not analyzing these things, I suggest looking at existing blades and basing your decision on that.
For structural reasons, there should also be a smooth transition from the circular root section to the airfoil-shaped sections. But in your case, if you are only interested in CFD analyses, the inboard blade sections typically contribute little to the overall forces and moments, so I wouldn't be too worried about the inboard shape :) In fact, if you are not including the hub in your simulations, the flow over the inboard sections will probably be unrealistic due to the root vortex...
Hope this helps.
BR,
Ricardo
  • asked a question related to Images
Question
5 answers
Can anybody provide me with Matlab code for chaos-based image encryption algorithms?
I am doing analysis of chaos based image encryption schemes for a project and want to analyze the cryptographic security of different image encryption schemes. It would be helpful for me if I can get Matlab codes for some of the popular schemes.
Relevant answer
Answer
We have uploaded several codes in Matlab central, feel free to browse through them, there are a couple on chaotic image encryption, and many more on chaotic random bit generators https://uk.mathworks.com/matlabcentral/fileexchange/?q=profileid:28825692
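If a Python reference is also useful for your analysis, here is a minimal toy sketch of a logistic-map keystream cipher applied to an image (my own illustration, not one of the published schemes):

import numpy as np

def logistic_keystream(length, x0=0.3141, r=3.99):
    # Iterate the logistic map x -> r*x*(1-x) and quantise each state to one byte
    stream = np.empty(length, dtype=np.uint8)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def xor_with_keystream(image, x0=0.3141):
    flat = image.flatten()
    key = logistic_keystream(flat.size, x0)
    return np.bitwise_xor(flat, key).reshape(image.shape)

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder image
cipher = xor_with_keystream(image)
recovered = xor_with_keystream(cipher)    # XOR with the same keystream decrypts
print(np.array_equal(recovered, image))   # True

Published schemes usually add a key-dependent permutation (confusion) stage and a diffusion stage on top of this, which is what you would want to reproduce for a fair security analysis.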
  • asked a question related to Images
Question
6 answers
I obtained .txrm and .exm files from a micro-CT scan on an Xradia Context. I want to do the post-processing in Dragonfly, but when I import the files, it asks about pixels and image spacing. Can anyone suggest how I can find these attributes to fill in the window? I am new to Dragonfly; I have seen the lessons, but the raw files used in them already had these attributes. I am also lost regarding which file format I should load. Any help is appreciated.
Relevant answer
Answer
A bit late, but Aastha Aastha did you change the filter for the file extension (see attached image)? For me it defaults to a specific file type, and so I have to change it to "All Files (*.*)" to get it to show everything?
  • asked a question related to Images
Question
1 answer
Hello everyone !
I am a master's student with a specialization in export management and I really need your help with finalizing my research question.
I read a couple of articles and found the interesting subject of country image (which can be dangerous because I might slide into a marketing subject).
But, having read some articles, I find it very interesting to talk about the importer/exporter point of view: how can country image create opportunities and, at the same time, entry barriers for the exporter, and what different variables should be taken into account?
I really need an external point of view, as I believe my question is quite complete, but I do not know if it is relevant for an export manager.
Thank you for your feedback,
Have a very nice day !
Best,
Jade
Relevant answer
Answer
Jade Devoucoux, that is too unclear; the research question must be built from quantities that you can measure.
  • asked a question related to Images
Question
2 answers
Atmospheric imaging especially ionospheric irregularity imaging with radar interferometry.
Relevant answer
Answer
It is possible to create it with the same coherence and phase under that day's atmospheric conditions. The instantaneous images formed by means of light spectra are the shapes or sizes obtained under the atmospheric conditions of that time.
  • asked a question related to Images
Question
3 answers
Asking for my practical work.
A person published an article, but the journal eventually identified manipulation of images in the published article. What are the further actions?
Could you briefly describe the steps?
Relevant answer
Answer
AI tools combat image manipulation
"Scientific publishers have started to use AI tools such as ImageTwin, ImaCheck and Proofig to help them to detect questionable images. The tools make it faster and easier to find rotated, stretched, cropped, spliced or duplicated images. But they are less adept at spotting more complex manipulations or AI-generated fakery..."
  • asked a question related to Images
Question
1 answer
I have taken electron diffraction images of my sample (a nanowire) as well as of a standard (Al) under the same conditions. The problem is that none of these images has a (reciprocal) scale bar. How can I put a reciprocal scale bar on these images? I want to analyze the Al standard and then, using the information from the standard, measure the d-values of the sample.
Relevant answer
Answer
Hello,
your TIF files contain some information in tag 270 (see below).
(To read the tags you can use the ImageJ plugin here:
It seems the calibration data are identical for both images:
XpixCal=233.059 YpixCal=233.059 Unit=A (The unit should read 1/A).
Using this information you can try to figure out the calibration in both images and see if the results for Al fit the published patterns for Al.
AlStandard_005_D.txt
TagNo (Tag Name) (Count TYPE ) Value
======================================================================
270 (ImageDescription) ( 108 ASCII ) I.M.A.G.E. 10/28/10 9:31 0.1 51 60 HC-DIFF 12.8 -15.2 -0. XpixCal=233.059 YpixCal=233.059 Unit=A ##fv3
Nanowire.txt
TagNo (Tag Name) (Count TYPE ) Value
======================================================================
270 (ImageDescription) ( 112 ASCII ) I.M.A.G.E. 10/27/10 11:55 0.1 51 60 HC-DIFF 748.7 333.4 -0. XpixCal=233.059 YpixCal=233.059 Unit=A ##fv3
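If that tag is indeed the reciprocal-space calibration, a rough way to use it would be along these lines (my assumption is that XpixCal gives the number of pixels per 1/Å; verify it by checking whether the Al rings return the known Al d-spacings):

# Assumed interpretation: XpixCal pixels correspond to 1 reciprocal angstrom (1/A)
pixels_per_inv_angstrom = 233.059

def d_spacing_from_radius(radius_px):
    # Convert a measured ring/spot distance from the central beam (in pixels) to a d-spacing in A
    g = radius_px / pixels_per_inv_angstrom   # reciprocal-space distance in 1/A
    return 1.0 / g

# Example with a placeholder measurement of 100 px
print(f"d = {d_spacing_from_radius(100):.3f} A")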
  • asked a question related to Images
Question
2 answers
Dear scientists,
I have encountered a problem where my bacteria (Staphylococcus epidermidis) grow perfectly fine in liquid media (tryptic soy broth) but not on agar plates (freshly prepared TSB plates). On the TSB plates that did grow bacteria, often only one small corner grew but not the other parts, although I streaked all over the plate from a glycerol stock (image 1). I also plated diluted bacterial solution from a liquid culture (OD about 0.06) on the plates, yet nothing grew (image 2). When I used the bacteria that did grow on the plates to streak another TSB agar plate, they grew, but nothing grew in the middle of the plate (image 5). However, when I used a 6-month-old LB agar plate for streaking, the bacteria grew perfectly fine (image 3). In addition, I also used an E. coli liquid culture to streak a TSB plate, and it grew perfectly fine (image 4). I don't understand why my bacteria have problems growing on TSB agar plates but can grow in liquid TSB media. TSB media is recommended by ATCC for the growth of S. epidermidis. This problem has halted all of my CFU experiments, as the bacteria don't grow on agar plates. Would you be able to give me any suggestions as to why this is happening? Your time and help are strongly appreciated!
Relevant answer
Answer
Thank you so much for answering my question! The recipe for TSB media is 15 g per 500 ml of water (as suggested by Sigma), and that for TSB plates is 6 g of TSB + 3 g of agarose in 200 ml of water. The agar concentration is 1.5%, and the TSB concentration is the same for the TSB broth and plates. The only thing I am worried about when making the plates is that I didn't put the agar flask into a 56 °C water bath after autoclaving to lower the temperature. But I will definitely make a new batch of TSB plates and try again!
  • asked a question related to Images
Question
3 answers
I am going to use the ImageJ software to count a mixture of cells at the same time. Does anyone know how I can count them using ImageJ? I know how to count just one type of cell, but I do not know how to proceed if I have a mixture of cells with different fluorescent proteins.
Relevant answer
Answer
It would help if you post a couple of images.
  • asked a question related to Images
Question
5 answers
I have purified an overexpressed protein from BL21 (DE3) cells using a Ni-NTA column. When I run the purified protein on native PAGE and SDS-PAGE, the two show different results: SDS-PAGE shows only one band of purified protein, whereas native PAGE shows two bands. I have repeated the whole experiment three times and found the same results. What could explain two bands on native PAGE but only one on SDS-PAGE? I have attached the native PAGE image.
Relevant answer
Answer
Shweta Rai Hi, Shweta. I also want to run my purified protein (from a Ni-NTA column) on native PAGE. Can you share the details? I ran a 6% resolving gel at 40 V at 4 degrees but could not see any band. The actual size of my protein is 50 kDa, but on SDS-PAGE I see a band around 75 kDa (repeated three times), so I want to run the purified protein on native PAGE.
  • asked a question related to Images
Question
2 answers
Hello, the image below is my cyclic voltammogram for a redox reaction with a metal and a ligand. Can you help me explain why I see two reduction and two oxidation peaks, respectively? What could this imply?
Relevant answer
Answer
Based on the cyclic voltammogram you shared, the observation of two reduction and two oxidation peaks suggests that there are likely two different redox processes occurring for the metal-ligand system under study. Some possibilities include:
  1. Step-wise reduction/oxidation - The metal ion center undergoes reduction by accepting electrons step-wise at two different potentials, showing two cathodic peaks. Correspondingly, there are two oxidation states generated that can each get oxidized back at separate anodic peaks.
  2. Different coordination environments - If the ligand can bind the metal center in multiple modes (e.g. uni- vs bidentate), each metal-ligand coordination geometry may exhibit its own reduction and oxidation characteristics leading to two sets of peaks.
  3. Dinuclear/polymeric structure - If the metal centers and ligands are assembling into dinuclear or oligonuclear complex structures, each metal site within that assembly may display quasi-reversible reduction and oxidation responses.
To distinguish these and obtain more definitive structural information, you could systematically alter experimental parameters like metal ion concentration, ligand ratio, pH levels, scan rates etc. and monitor the voltammetric response. Complementary structural characterization techniques (NMR, ESI-MS) would also help elucidate the origin of the dual redox behavior you observe.
  • asked a question related to Images
Question
3 answers
Hello everyone,
I recently encountered a noise problem in my patch clamp experiments. I happened to observe a small current and a noise when the electrode holder was in the open circuit. The observed current was 20 pA in the open circuit and 8 pA when the reference electrode was immersed in bath. The observed noise was at 50 Hz in both instances. (Image 1: open circuit without bath) (Image 4; when both the reference and recording electrodes are immersed in bath)
All the devices are grounded in a common earth/ground line, in which all the equipment and cage around the system are grounded to the grounding bus, which is then connected to the Axopatch 200B signal ground.
My RMS (pA) values are around 9 during the membrane test, and they are between 2-6 pA during the episodic scope run. (Image 2: Episodic scope)
I use a microperfusion system without a vacuum suction unit to aspirate the perfusion out of the bath. And I use the help of gravitation to make the solutions flow.
I’ve tried grounding the microscope to the rear gold connector of the head stage, but unfortunately, it wasn’t helpful.
I checked to locate the source of noise by turning it off and unplugging them one by one, but the RMS value remained around 9 all the time. I’ve covered the light source on the roof with copper mesh that’s used to make Faraday cages. I’ve attached two more images for your kind reference of noise when only the reference electrode is in the bath (Image 3; Reference electrode only in bath) and when both the reference and recording electrodes are in the bath (Image 4; when both Reference and recording electrodes are immersed in bath)
I’m currently focused on endogenous currents of ligand-gated and voltage-gated ion channels using a whole-cell voltage clamp configuration.
Please help me figure out the problem, and I’m grateful for your kind responses.
Thank you very much.
Nirujan
Relevant answer
Answer
In addition, when I unplug cables from the extension cords and plug them back in, or plug the extension cord into different wall sockets, the white noise (the spikes) disappears and reappears within an hour or two.
  • asked a question related to Images
Question
1 answer
I have attached the model. I want to have a simulation box containing water molecules to mimic fluid in the simulation. The other image shows TIP3P water added to the system. I am new to MD simulation. Any ideas about this are appreciated. Thanks
Relevant answer
Answer
This is a cutting process modelled in LAMMPS.
In LAMMPS, you can use the molecule keyword to define a water molecule and then use create_atoms to create a given number of water molecules in the system.
Before using the molecule keyword, you need to define a water molecule template.
  • asked a question related to Images
Question
4 answers
Image and text encryption
Relevant answer
Answer
Dear Dr. Susan John ,
You may want to review following useful information below:
Here are some general trends and considerations that might influence advancements in this area:
  1. Post-Quantum Cryptography (PQC): With the rise of quantum computing, there's a growing interest in post-quantum cryptography, which includes ECC. Researchers are exploring new cryptographic algorithms that can withstand attacks from quantum computers. This might lead to the development of new ECC-based algorithms or adaptations of existing ones.
  2. Efficiency Improvements: There is a continuous effort to enhance the efficiency of ECC algorithms, making them more suitable for resource-constrained environments, such as those found in IoT devices or mobile applications. Optimizing ECC for performance can be particularly relevant for image encryption, where computational efficiency is crucial.
  3. Pairing-Based Cryptography: Pairing-based cryptography, a type of cryptography that relies on mathematical pairings, has shown promise in various applications, including image encryption. Advances in pairing-based elliptic curve cryptography (PBECC) may have implications for image encryption techniques.
  4. Homomorphic Encryption: While not exclusive to ECC, homomorphic encryption is a field of cryptography that allows computations to be performed on encrypted data without decryption. This can be relevant in scenarios where image data needs to be processed without exposing its raw form. Advances in homomorphic encryption may complement ECC-based image encryption techniques.
  5. Standardization Efforts: The standardization of cryptographic algorithms is an ongoing process. Organizations like the National Institute of Standards and Technology (NIST) play a crucial role in this regard. Keep an eye on NIST's activities and any potential standardization of ECC algorithms for image encryption.
  • asked a question related to Images
Question
4 answers
What is quantitative image analysis?
Relevant answer
Answer
Quantitative image analysis is the act of extracting objective, numerical insights from images.
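For instance, a minimal scikit-image sketch that turns an image into numbers (object count, areas, mean intensities); the synthetic image here is just a placeholder:

import numpy as np
from skimage import filters, measure

image = np.zeros((200, 200))
image[40:80, 40:80] = 1.0       # synthetic bright square
image[120:160, 100:170] = 0.7   # synthetic dimmer rectangle

mask = image > filters.threshold_otsu(image)
labels = measure.label(mask)

for region in measure.regionprops(labels, intensity_image=image):
    print(f"object {region.label}: area = {region.area} px, "
          f"mean intensity = {region.mean_intensity:.2f}")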
  • asked a question related to Images
Question
4 answers
How can the future of the world or the universe be imagined in the image of quantum physics?
In quantum physics, what Einstein imagined and depicted in his general and special theories of relativity is, in a way, being realized, and Hubble and Einstein both worked hard in this direction. Today, where do we stand in this world and in the universe, and specifically where are we now? Where exactly are we, and how many years have passed since the Big Bang? In which part of the universe are we located, and how far away is its end? Does anyone imagine how small we humans are compared to the vastness of the universe, and how astronomy and cosmology answer the question of where we are? Where is the end of the universe? How far does dark matter extend in this universe? How far have the stars and galaxies expanded? Are there planets where intelligent beings like us humans exist? With the help of quantum physics and astronomy, can we help future humanity better understand the universe, our environment, and the conditions of climate change, and understand the ideas of the Yugoslav scientist Milanković (Milankovitch) about climate change on Earth? Will the future be an age of drought, or an age of frost and cold?
Relevant answer
Answer
Greetings and courtesy
Thank you very much for your complete and comprehensive answer. Thank you Abbas
  • asked a question related to Images
Question
3 answers
Markov-CA in IDRISI gives the error "second input image not found"
Relevant answer
Answer
Hello Ma'am, I am facing the same problem. Could you please tell me how you found the solution?
  • asked a question related to Images
Question
4 answers
I have this surface potential image; how can I get the CPD value from this data?
Relevant answer
Answer
Aparajita Kadam yep. It is CPD between the sample and your probe. If you want to know the work function of your sample, use a reference sample with a well-defined work function.
φ_sample = φ_tip − e·V_CPD
φ_sample = φ_tip − e·V_CPD_sample
φ_reference = φ_tip − e·V_CPD_reference
φ_sample = φ_reference + e·(V_CPD_reference − V_CPD_sample)
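As a small numeric illustration of the last relation, here is a minimal sketch in Python; the work function and CPD values below are purely hypothetical placeholders, and the sign convention should be checked against how your instrument defines the CPD (tip minus sample, or the reverse).

```python
# Minimal sketch: estimating a sample work function from KPFM CPD measurements.
# Working directly in eV and V, so the elementary charge e simply converts V to eV.

phi_reference = 4.60        # eV, hypothetical reference work function
v_cpd_reference = 0.30      # V, CPD measured between tip and reference (hypothetical)
v_cpd_sample = 0.10         # V, CPD measured between tip and sample (hypothetical)

# phi_sample = phi_reference + e * (V_CPD_reference - V_CPD_sample)
phi_sample = phi_reference + (v_cpd_reference - v_cpd_sample)  # eV
print(f"Estimated sample work function: {phi_sample:.2f} eV")
```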
This approach is detailed in the following work:
  • asked a question related to Images
Question
2 answers
Hi everyone,
My research is exploring how social comparison (comparing yourself to others) influences body image and eating concerns. I am looking for females who are aged 18+ and from the UK. If you are interested and willing to be interviewed, please email [email protected] and we can arrange a time slot. The duration of the interview will be around 30 minutes. However, please do not request to participate if you have an eating disorder.
Thank you.
Relevant answer
Answer
That's great.
  • asked a question related to Images
Question
3 answers
Are you a dedicated researcher passionate about advancing the field of Radiomics in Nuclear Medicine? An exciting co-authorship opportunity awaits individuals with a keen interest in contributing to cutting-edge research. We are embarking on a research project that delves into the intricate realm of Radiomics within the domain of Nuclear Medicine. This initiative aims to investigate cardiac radiomics and establish a guideline for using them in both research and clinical practice. As we navigate the complexities of this field, we invite talented researchers to join us in making meaningful contributions. As a co-author, you will play a pivotal role in image processing/analysis and AI operations. This is a collaborative effort where your expertise will contribute to the advancement of knowledge and innovation in Radiomics within Nuclear Medicine.
Relevant answer
Answer
Great! Maybe we can have a chat.
  • asked a question related to Images
Question
5 answers
Which algorithms are best suited for solar panel defect detection, image processing, and forecasting using AI in solar power plants?
Please suggest the top algorithms and models presently used in solar PV plants.
Relevant answer
Answer
I also think that a neural network is a suitable and good choice.
  • asked a question related to Images
Question
3 answers
I am writing an article for Elsevier in two-column format. Can anyone help me find the template? I have the code, but I can't set up the title page and can't add the logos in the proper positions.
Thanks
Relevant answer
Answer
  • asked a question related to Images
Question
1 answer
Express the significance of image processing in remote sensing for agriculture. Discuss the various techniques used in image enhancement, classification, and interpretation for accurate crop monitoring.
Relevant answer
Answer
Please see the paper: for image enhancement techniques.
The merits and demerits of various image enhancement techniques are discussed in detail in the above article.
  • asked a question related to Images
Question
4 answers
Hi, our team is currently exploring research opportunities regarding visualization tools for computational materials design of inorganic crystal structures. I am writing to ask what features people would like to have for such an interface?
I imagine such an interface to be something where we start with a chemical composition and then see the stable phases and corresponding crystal structures. It would also be good if we could know the predicted properties of interest as well as how the structures can be related to the properties. Any thoughts/comments are welcome and appreciated. Thank you.
Relevant answer
Answer
Cheng Zeng I can share my thoughts and experiences to help you and your team in this journey. To make it accessible and engaging, let's break down the key aspects of your proposed interface and discuss them one by one.
  1. Chemical Composition Entry: Imagine this as the starting point. It's like entering the ingredients for a recipe. Users should be able to input the elements they want to work with, just like writing down the recipe for a dish.
  2. Stable Phases Visualization: Once the composition is entered, the interface should display the possible stable phases. Think of this as different recipes that can be created with the same ingredients. Users can visualize the various forms that these materials can take.
  3. Crystal Structures Exploration: Here, users should be able to delve deeper into each stable phase. It's like exploring the details of a dish in a recipe book. Users can rotate, zoom in, and inspect the crystal structures to understand their geometry.
  4. Predicted Properties: Just as a recipe book may provide nutritional information, your interface should offer predicted properties. Users should be able to see the characteristics and behavior of these materials. This is like knowing how a dish tastes and its nutritional value.
  5. Relationship Between Structure and Properties: This is the magic part. Users should be able to link the crystal structures to the properties. Think of it as understanding why certain ingredients and cooking methods result in a particular taste. In your case, why a specific crystal structure exhibits certain properties.
  • asked a question related to Images
Question
1 answer
I'm a secondary school teacher who is starting a new research in the field of Visual Thinking Strategies (VTS) in Ibi, Spain.
We are a group of 4 teachers that have been working VTS in our school with students between 12 and 14 years old during the last 5 years.
This year we have started a new research project based on a two-stage VTS activity. The first stage is carried out by one group of students and consists of describing an image. The second stage is carried out by a different group of students and consists of drawing a picture after listening to the description given by the first group.
The results of this research can be applied to teaching people with a disability (blind people).
Have any of you been working in this field?
Relevant answer
Answer
Yes, I am doing research in the same field. My research is on junior college students who describe diagrams by looking at their characteristic features.
  • asked a question related to Images
Question
3 answers
How do we convert the fluorescence intensity from confocal microscopy into a quantitative result using ImageJ?
Relevant answer
Answer
Dear Albrakati,
I would say any image you acquire should store a value in each pixel (eg: from 0-255 for 8-bit images).
In FIJI, you can open these images and select an ROI (region of interest - a rectangle, polygon, or circle where you want to measure).
Then you press M (measure - Menu > Analyze > Measure). This should pop up a Results table in another window.
PS: you can select your desired measurement parameters in Menu > Analyze > Set Measurements...
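If you prefer a scripting route outside FIJI, here is a minimal Python sketch of the same measurement; it assumes a single-channel TIFF, and the file name and ROI coordinates are hypothetical placeholders.

```python
# Minimal sketch: mean intensity in a rectangular ROI of a fluorescence image.
import numpy as np
from skimage import io

img = io.imread("confocal_channel.tif")   # hypothetical file name

# Rectangular ROI given as (row_start, row_end, col_start, col_end) - hypothetical coordinates
r0, r1, c0, c1 = 100, 200, 150, 300
roi = img[r0:r1, c0:c1]

print("Mean intensity:", float(np.mean(roi)))
print("Integrated density:", float(np.sum(roi)))
print("Max / min:", int(roi.max()), int(roi.min()))
```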
Cheers from Portugal,
Vítor Yang
  • asked a question related to Images
Question
3 answers
This dot appeared instead of a visible distinct band. The concentration of the DNA was 30 micrograms/ml.
Can you please help me determine whether this is a band or not, and why it appears like that?
Relevant answer
Answer
That is a concentration and not an amount. How much did you add?
And how do you know that concentration is accurate?
  • asked a question related to Images
Question
2 answers
I am looking for a new source for filters for mosquito mouth aspirators similar to the attached image. Any suggestions much appreciated
Relevant answer
Answer
@Lukas Lischer unfortunately I have not found a good substitute
  • asked a question related to Images
Question
7 answers
While clipping the image, some pixels fall outside the shapefile. Please suggest the proper way to resolve this problem in ArcMap.
Relevant answer
Choose the "Clip" or "Extract by Mask" function from the Geoprocessing tools, select the raster layer, specify the shapefile as the clipping boundary, and execute the tool to clip the raster image precisely according to the shapefile boundaries.
  • asked a question related to Images
Question
4 answers
I have a lot of satellite images that I have already segmented. After that step, I want to calculate the real area in square metres of the buildings that were segmented in the images.
Now I am stuck at this step; please give me instructions or a formula.
Thanks & Best Regards
Thanks & Best Regards
Relevant answer
Answer
Dear Loc Loc ,
To calculate the real-world area in square meters from segmented building regions in satellite images, you'll need to perform a process known as "georeferencing" or "geo-referencing." This involves mapping the pixels in your segmented image to their corresponding locations on the Earth's surface.
Regards,
Shafagat
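As a rough illustration of the idea, here is a minimal Python sketch with rasterio; it assumes the segmentation mask is already a georeferenced GeoTIFF in a projected CRS whose units are metres (e.g. UTM) and that building pixels are labelled 1. The file name and label value are placeholders.

```python
# Minimal sketch: building area from a georeferenced segmentation mask.
import numpy as np
import rasterio

with rasterio.open("building_mask.tif") as src:   # hypothetical file name
    mask = src.read(1)
    xres, yres = src.res                          # pixel size in CRS units

pixel_area = abs(xres * yres)                     # m^2 per pixel if the CRS is metric
building_pixels = int(np.count_nonzero(mask == 1))
print("Building area:", building_pixels * pixel_area, "m^2")
```

The formula is simply area = (number of building pixels) x (ground area of one pixel); if the image is not yet georeferenced, the pixel ground size has to be established first, e.g. from the sensor's ground sampling distance.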
  • asked a question related to Images
Question
1 answer
I have conducted a detached leaflet inoculation assay to measure/compare disease severity. I used image analysis software to get the % diseased area. While using these actual numbers is the usual practice, is it acceptable to convert those percentages into a scale, such as 0-15, or 1-12 like the H-B scale used for visual estimation?
Relevant answer
Answer
Jacob Schneider, I assume that this question is related to the recent one (Do you recommend using a Disease Severity Index if you have digitally measured diseased area?). Transformation of leaf area percentages into a scale is possible, but you will lose a significant amount of information.
  • asked a question related to Images
Question
1 answer
Explore the relationship between chroma subsampling and image quality in digital multimedia. Seeking insights into the effects of color information compression on visual fidelity.
Relevant answer
In my view, the loss of color information may lead to visible artifacts, such as color bleeding or blockiness, especially in areas with rapid color changes. These artifacts are more noticeable in images or videos with fine color details or gradients.
Indeed, chroma subsampling is a compromise between efficient data representation and color accuracy. The choice of subsampling ratio depends on the specific requirements of the application, the characteristics of the content, and the acceptable level of visual quality for the intended audience.
  • asked a question related to Images
Question
1 answer
Hi, this is Haribabu. I have a doubt regarding optimization algorithms. My question is how to apply an optimization algorithm to an image for weights.
Relevant answer
Hello Haribabu! If you're looking to apply an optimization algorithm to adjust weights in an image, it sounds like you might be interested in optimizing the parameters of a model, such as in the context of training a neural network. Here's a general guide on how optimization algorithms are commonly used in deep learning for adjusting weights in a model:
Define your Model:
Start by defining the architecture of your model, including the number of layers, types of layers (e.g., convolutional, fully connected), and the activation functions.
Loss Function:
Choose a loss function that measures the difference between the model's predictions and the actual target values. This is the function that you want to minimize during the optimization process.
Optimization Algorithm:
Select an optimization algorithm that will adjust the weights of your model to minimize the chosen loss function. Common optimization algorithms include Stochastic Gradient Descent (SGD), Adam, RMSprop, and others. These algorithms work by iteratively updating the model weights in the direction that reduces the loss.
Backpropagation:
Implement the backpropagation algorithm to calculate the gradient of the loss function with respect to the model's parameters (weights). This gradient information is used by the optimization algorithm to update the weights in the right direction.
Training Loop:
Create a training loop where you feed your training data into the model, calculate the loss, perform backpropagation to compute gradients, and then update the model weights using the chosen optimization algorithm. Repeat this process for multiple epochs or until the model converges.
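As a minimal sketch of these steps in PyTorch: the tiny model, random stand-in data, and hyperparameters below are placeholders, not a recommendation for any particular image task.

```python
# Minimal sketch: optimizer adjusting model weights to minimize a loss on image data.
import torch
import torch.nn as nn

model = nn.Sequential(                      # 1. define the model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
criterion = nn.CrossEntropyLoss()           # 2. loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # 3. optimization algorithm

# Dummy batch standing in for a real DataLoader (hypothetical shapes).
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 10, (8,))

for epoch in range(5):                      # 5. training loop
    optimizer.zero_grad()
    loss = criterion(model(images), labels) # forward pass
    loss.backward()                         # 4. backpropagation computes gradients
    optimizer.step()                        # optimizer updates the weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```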
  • asked a question related to Images
Question
4 answers
Hello,
I have an image with 3 bands RGB and the other one with the NIR band, so how can I align these 2 images into one image with 4 bands in ArcGIS? thanks.
Relevant answer
Answer
Thanks a lot! I think it worked. I selected and exported the first band of my NIR image as "red" and then merged the RGB bands with it; now I have a 4-band raster in which the 4th band is NIR.
Best regards,
Maryam
  • asked a question related to Images
Question
2 answers
Is it possible to calculate flood inundation level from a Sentinel-1 radar image?
Relevant answer
Answer
  • asked a question related to Images
Question
10 answers
I am working on a reversible data hiding technique. First I encrypt the image using the RSA algorithm, then I embed data in the encrypted image. But I don't know how to decrypt the image and recover the original image.
Relevant answer
Answer
Please have a look at my papers:
A. Abusukhon, Z. Mohammad, A. Al-Thaher (2021) An authenticated, secure, and mutable multiple-session-keys protocol based on elliptic curve cryptography and text-to-image encryption algorithm. Concurrency and Computation: Practice and Experience. [Science Citation Index].
A. Abusukhon, N. Anwar, M. Mohammad, Z. Alghanam, B. (2019) A hybrid network security algorithm based on Diffie Hellman and Text-to-Image Encryption algorithm. Journal of Discrete Mathematical Sciences and Cryptography, 22(1), pp. 65-81. (SCOPUS). https://www.tandfonline.com/doi/abs/10.1080/09720529.2019.1569821
A. Abusukhon, B. Wawashin (2015) A secure network communication protocol based on text to barcode encryption algorithm. International Journal of Advanced Computer Science and Applications (IJACSA). (ISI indexing). https://thesai.org/Publications/ViewPaper?Volume=6&Issue=12&Code=IJACSA&SerialNo=9
A. Abusukhon, Talib, M., and Almimi, H. (2014) Distributed Text-to-Image Encryption Algorithm. International Journal of Computer Applications (IJCA), 106(1). [Available online at: https://www.semanticscholar.org/paper/Distributed-Text-to-Image-Encryption-Algorithm-Ahmad-Mohammad/0764b3bd89e820afc6007b048dac159d98ba5326]
A. Abusukhon (2013) Block Cipher Encryption for Text-to-Image Algorithm. International Journal of Computer Engineering and Technology (IJCET), 4(3), 50-59. http://www.zuj.edu.jo/portal/ahmad-abu-alsokhon/wpcontent/uploads/sites/15/BLOCK-CIPHER-ENCRYPTION-FOR-TEXT-TO-IMAGE ALGORITHM.pdf
A. Abusukhon, Talib, M. and Nabulsi, M. (2012) Analyzing the Efficiency of Text-to-Image Encryption Algorithm. International Journal of Advanced Computer Science and Applications (IJACSA) (ISI indexing), 3(11), 35-38. https://thesai.org/Publications/ViewPaper?Volume=3&Issue=11&Code=IJACSA&SerialNo=6
A. Abusukhon, Talib, M., Issa, O. (2012) Secure Network Communication Based on Text to Image Encryption. International Journal of Cyber-Security and Digital Forensics (IJCSDF), 1(4). The Society of Digital Information and Wireless Communications (SDIWC) 2012. https://www.semanticscholar.org/paper/SECURENETWORK-COMMUNICATION-BASED-ON-TEXT-TO-IMAGE-Abusukhon-Talib/1d122f280e0d390263971842cc54f1b044df8161
  • asked a question related to Images
Question
2 answers
If anyone has any kind of solution for this error, please help me out.
Relevant answer
Answer
Hello sir,
did you find any solution for this bug? I have the same problem.
  • asked a question related to Images
Question
4 answers
The 2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2024) will be held on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: February 1, 2024
Registration Deadline: March 1, 2024
Final Paper Submission Date: March 15, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Vision and Image technologies
- DL Technologies
- DL Applications
All accepted papers will be published by IEEE and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
For more details please visit:
Relevant answer
Answer
Great opportunity!
  • asked a question related to Images
Question
3 answers
..
Relevant answer
Answer
Dear Doctor
"Compared to its predecessors, the main advantage of CNN is that it automatically detects the important features without any human supervision. This is why CNN would be an ideal solution to computer vision and image classification problems."
  • asked a question related to Images
Question
3 answers
Hi guys,
The image is attached; please guide me. The average pore diameters were quantified, but the reviewer asked me:
1. How do we determine the average pore wall thickness for each layer?
2. How do we quantify the directionality of the pores in the intermediate layer?
Relevant answer
Answer
AI is not required here. Basic image processing will do. All the necessary functions are available in OpenCV.
For pore thickness you can first create an edge-image using e.g. Canny edge detection, and then perform distance transform. That will give you the radius of the pores. From there, you can figure out how to derive the quantities you want.
For directionality, you can compute the image brightness derivatives in the x- and y- directions, gx and gy respectively, and then compute the direction of the derivative by doing atan(gy/gx). The pores are perpendicular to the direction of the derivative, so the angle you're looking for (in radians) will be equal to pi/2 - atan(gy/gx).
This can all be achieved if you have at least some knowledge of Python. There are plenty of good tutorials on OpenCV with examples similar to what you need to do here.
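As a rough illustration of both steps with OpenCV in Python: the file name and Canny thresholds are placeholders that would need tuning for the actual SEM image, and arctan2 is used instead of atan to avoid division by zero where gx is small.

```python
# Minimal OpenCV sketch: pore size via edge map + distance transform,
# and pore directionality via image gradient orientation.
import cv2
import numpy as np

img = cv2.imread("pores.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name

# Pore size: distance from each non-edge pixel to the nearest edge ~ local radius.
edges = cv2.Canny(img, 50, 150)
dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)

# Directionality: brightness derivatives and their orientation.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
orientation = np.pi / 2 - np.arctan2(gy, gx)   # pore direction, perpendicular to the gradient

print("Typical pore radius (px):", float(np.median(dist[dist > 0])))
```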
  • asked a question related to Images
Question
3 answers
I was checking the position of one of the Greek islands on Google Earth and got distracted by some interesting structures shown on the seafloor north of Crete (see attached Google Earth image): first a pair of parallel lines, some 10km apart, running NW-SE in the upper right part of the image (maybe faults?); and secondly, the curious arrays of evenly spaced km-sized pimples north of Heraklion and Malia - what on Earth are they? Or could they have arisen as artefacts from the image processing, somehow (but seems doubtful)?
Just idle curiosity, but I'd be interested to hear if anyone can throw light on them.
Thanks,
Peter Skelton
Relevant answer
Answer
Thanks, Reuben. The possibility of artefact had occurred to me and your suggestion of that for the parallel lines sounds plausible. For the pimples, from their km scale I'd wondered if they might be olistoliths, but their seemingly regular arrangement is curious - though I wonder if that regularity might not itself be an imaging artefact? Pure inexpert guess on my part! Any other ideas out there?
  • asked a question related to Images
Question
1 answer
This is an SEM model showing the impact of 5 sensory elements on brand image. A panel member said the arrows must point towards Brand Image, but that is not possible in AMOS.
Relevant answer
Answer
Think of the arrows as statements about causality. In your model you are saying "Brand Image causes Auditory", also "Auditory causes MAT" and so on. This is the normal approach in SEM with AMOS. You have observed variables (MAT, MDE, MPE, etc.) that are manifestations of unobserved, or latent, constructs (Auditory, Olfactory, etc.). In your model, these unobserved constructs are themselves manifestations of a second-order latent construct, Brand Image.
So that is the theory that you are presenting in your model: your observed (or 'manifest') variables are a result of some other construct that we cannot measure directly, the latent construct Brand Image.
Your panel members believe that arrows - the causal links - should point to Brand Image from Auditory, Olfactory, etc. That would be to say that Auditory causes Brand Image (along with the other first-order latent constructs). You can do that only if you have separate independent measures of Brand Image. That is, where you can say that observed variables such as 'value for money', 'prestige', 'innovativeness', and so on (see Plumeyer et al. for many other examples) are manifestations of Brand Image.
Your model is not a structural model of causal relationships among constructs. It is a model where you propose that Brand Image, which we cannot observe directly, produces five perceptions (Auditory, Olfactory, etc., which we also cannot observe directly), and each of these is manifested by about seven observed variables.
This is not wrong as such. It could be useful if you want to show that your measures are good at measuring your constructs.
Having said that, you should also do the following:
* Present the standardised coefficients. This helps you and the reader understand how much stronger some coefficients are than others. Check the statistical significance of the coefficients from your constructs to your manifest variables. You will probably find that you can comfortably use just four or five manifest variables for each construct instead of the seven or eight that you have.
* Before including the second-order latent construct Brand Image to the model, check the measurement model. (correlate each of the first-order constructs with each other first-order construct, and then check the sign and strength of the coefficients to decide if you really need that observed variable in the model)
* Check for common-method bias and decide if you really need some observed variables.
There are lots of 'how-to' tutorials online and YouTube videos to show you how to do these, Shivam Bhardwaj
Good luck.
Reference:
Plumeyer, A., Kottemann, P., Böger, D. et al. Measuring brand image: a systematic review, practical guidance, and future research directions. Review of Managerial Science 13, 227–265 (2019). https://doi.org/10.1007/s11846-017-0251-2
  • asked a question related to Images
Question
1 answer
What is the horizontal size of an AFM image when using the Gwyddion software?
Relevant answer
Answer
Assuming you have imported your data correctly and you are using the default settings, you should have rulers on the left and top sides of the image, which answer your question.
  • asked a question related to Images
Question
5 answers
I have an SEM image and I want to know the particle size distribution in the image field.
Relevant answer
Answer
I'm afraid your particles are much too agglomerated to do anything. I'm pretty sure that no usual image analysis procedure will be able to separate them.
When working on that type of dispersion/suspension, I generally dilute them, further disperse them under ultrasonics and finally spin coat them on a SEM stub, so that particle dispersion is as optimal as can be and they become single objects.
Not even sure machine learning / AI would help...
  • asked a question related to Images
Question
4 answers
Radiation is joking with the Second Law of Thermodynamics, and scientists have been tricked. Below is a comparative description.
A--Output of the second law of thermodynamics
B--The experimental performance of radiation.
1A, Second Law of Thermodynamics: Heat cannot spontaneously transfer from low to high temperatures.
1B, thermal radiation: Low temperatures can radiate to high temperatures, while high temperatures can radiate to low temperatures.
2A, scientists' bet on the heat transferred by radiation: q(T1_to_T2) > q(T2_to_T1), where T1 > T2
2B, actual intensity of thermal radiation:
q(T1_to_T2) = q(T1, n1); q(T2_to_T1) = q(T2, n2)
n1, n2 – number of internal radiation structures of heat sources 1 and 2. Specific example: 1 is helium, 2 is CO2, and n1 will be less than n2. In this case,
q(T1_to_T2) < q(T2_to_T1), where T1 > T2
3A,Scientists from the 17th to 18th centuries believed that knowledge like 2A could be forgiven.
3B, scientists in the 21st century still believe in knowledge like 2A, which would be a bit foolish.
4B, see simulation case (image) for details
Relevant answer
Answer
Radiation cooling or heating does not consume any work. We welcome guidance from thermal scientists and engineers.
This is a radiation simulation case using COMSOL, which satisfies empirical laws and energy conservation. See image for details
1. This setting includes radiation experience: when the gas density is small, the radiation intensity is proportional to the density, and the absorption coefficient is inversely proportional to the density (the smaller the absorption coefficient, the stronger the absorption capacity)----- Domain 1 gas density=1, Domain 2 gas density=2.,
2. Radiation generates a temperature difference of 2.1 ℃, rendering the second law of thermodynamics invalid.
3. This transposition can be connected in series to generate stronger heating and cooling capabilities, with low cost, and can be industrialized and commercialized.
4. This article also includes an analysis of the imbalance in calculating radiation. You are welcome to read it.
  • asked a question related to Images
Question
2 answers
I have the LULC (Land Use Land Cover) image set; please recommend a standard way to improve the accuracy of these images.
Relevant answer
Answer
I agree with Nikolaos Tziokas. If you did your own LULC classifications, you can try to repeat your analysis changing algorithm parameters to improve them. But, if you downloaded a LULC dataset, there is not much to do.
  • asked a question related to Images
Question
2 answers
The picture shows a GITT diagram of a graphite and silicon composite half-cell. Why does it show a reversal to higher voltage in the circled regions? Is it due to the electrode's high resistivity, or is there another reason?
Relevant answer
Answer
If you measure (some diagnostic) EIS [1], you might identify the reason.
[1] V_dc polarization inside the range [0.25, 0.30] V
  • asked a question related to Images
Question
3 answers
I want to create an image dataset for training a YOLOv8 model for an autonomous robot working on a steel structure assembly site. As it is difficult to get a specific dataset related to steel structure assembly robots, I am planning to use synthetic images for training and then deploy the model in the real world. I am currently using free assets such as trucks, forklifts, cranes, cement mixers, humans, etc. from the Unity Asset Store, together with the Unity Perception package for automatic labelling. I tried different randomizers, but I am not satisfied with the trained model, as it fails to detect objects almost 50% of the time when tested with real images.
Any suggestions or expert opinions on creating an effective synthetic dataset will be highly appreciated.
Relevant answer
Answer
  1. GANs (Generative Adversarial Networks): TensorFlow and PyTorch are deep learning frameworks that provide implementations for GANs. You can use these frameworks to train GAN models to generate synthetic images based on a given dataset.
  2. Image synthesis libraries: imgaug is a powerful image augmentation library in Python that can be used to create variations of existing images, effectively expanding your dataset. Augmentor is another Python library specifically designed for image augmentation.
  3. 3D rendering engines: Blender is a 3D content creation suite that can be used for rendering synthetic images; it is highly versatile and can generate realistic scenes. Unity3D, a game development engine, can be used in conjunction with the ML-Agents toolkit for creating synthetic datasets.
  4. Data augmentation tools: Albumentations is a fast image augmentation library for Python. While it is designed for augmenting existing datasets, it can be used creatively to extend synthetic data as well (see the sketch after this list).
  5. Microsoft Research Deep Image Prior (DIP): While primarily designed for image restoration tasks, Deep Image Prior can be used to generate realistic images based on the given context.
  6. PokeGAN: If you're specifically interested in generating synthetic Pokémon images, PokeGAN is a GAN-based model designed for this purpose.
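As a minimal sketch of point 4, here are a few classical augmentations applied to a synthetic render with Albumentations; the chosen transforms and file names are placeholders, and for YOLO training you would additionally pass bounding boxes via bbox_params so labels stay aligned with the augmented image.

```python
# Minimal sketch: augmenting a synthetic Unity render to better match real-world imagery.
import cv2
import albumentations as A

transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.GaussNoise(p=0.3),
    A.MotionBlur(p=0.2),
])

image = cv2.imread("synthetic_render.png")            # hypothetical Unity render
augmented = transform(image=image)["image"]
cv2.imwrite("synthetic_render_aug.png", augmented)
```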
  • asked a question related to Images
Question
1 answer
The cells mentioned here are present in both the outer and inner phloem.
Relevant answer
Answer
Hi, In the image provided, the structure indicated by the arrow within the phloem is most likely a sclereid.
  • asked a question related to Images
Question
2 answers
Hi!
I have tried the copy-and-paste option, but the diagram gets blurry. "Print to PDF" is better, as it preserves the quality of the image. However, for some reason, one side of the diagram turns into a big black space in the PDF.
I need the diagram in PDF, JPG, JPEG, PNG or TIFF format with 300 dpi resolution for it to be printable. How could I solve this problem?
Kind regards
Katariina Kätkä
Relevant answer
Answer
It seems this option is no longer available in Amos v.28. The user manual advises to do - Edit - Copy to clipboard, then paste into Word or any other app. I tried this approach and the image is of good quality.
  • asked a question related to Images
Question
2 answers
Hello everyone
I need the source code of the paper "Image deblurring via extreme channels prior 2017" for my recent research work. Does anyone have this code to share?
Relevant answer
Answer
You can contact the authors or check paperswithcode.com, in case the code has been published.
These should be the most likely places to find the code.
  • asked a question related to Images
Question
4 answers
I am looking for a high-resolution dataset (an alternative to ImageNet) that has classes with sub-groups. I need this dataset for a domain transfer experiment. Basically, I will be using a DNN pre-trained on ImageNet to extract features. Example: in CIFAR100, 100 classes are grouped into 20 superclasses, such that each superclass has several subclasses. I need a similar dataset, but it has to be high resolution. Can you suggest one?
Relevant answer
Answer
FGVC-Aircraft, iNaturalist 2018, 2019, and 2021, WebVision-1000, PASS (ImageNet without Human)
  • asked a question related to Images
Question
2 answers
I am studying the influence of current beauty standards on the self-esteem and body image of young women; what would be the ideal research design, qualitative or quantitative?
Relevant answer
Answer
Also, it can be considered both quantitative and qualitative.
  • asked a question related to Images
Question
4 answers
There are many articles published on quantum image encryption, either based on chaos applications or other techniques. Still, for the diffusion process, every author used quantum XOR via the CNOT or CCNOT gate, and I have done the same thing using a newly defined chaotic map. I have one doubt: whenever we test our encrypted image with the NPCR and UACI tests, the results come out very low, not even close to 99.60 and 33.46. So, my question is how the authors obtained such excellent NPCR and UACI results. Please, can anyone help me with this issue?
Relevant answer
Answer
Hi,
The NPCR (Number of Pixel Change Rate) and UACI (Unified Average Change Intensity) tests are commonly used to evaluate the quality of encrypted images in terms of their resistance against attacks and the level of diffusion achieved. For good diffusion, the NPCR should be close to its theoretical ideal of about 99.61% and the UACI close to about 33.46%; values far from these targets indicate weaker diffusion.
If the NPCR and UACI results you obtained are significantly lower than the expected values of 99.60 and 33.46, respectively, it suggests that the diffusion process in your encryption algorithm may not be effective enough. This could be due to a variety of reasons, including the choice of the chaotic map, the implementation of the diffusion process, or errors in the encryption algorithm.
Here are a few things you can consider to improve the NPCR and UACI results:
Chaotic map selection: Ensure that the chaotic map you are using has good statistical properties and provides sufficient diffusion. Different chaotic maps have different properties, and some may be better suited for image encryption than others. You could try experimenting with different chaotic maps to see if they improve the diffusion properties of your algorithm.
Diffusion process: Review the diffusion process in your encryption algorithm. It's possible that the diffusion process is not adequately spreading the pixel changes throughout the encrypted image. You may need to revisit the diffusion mechanism and consider alternative techniques to enhance the diffusion properties.
Implementation errors: Double-check your implementation of the encryption algorithm and the diffusion process. Small errors or inconsistencies in the code could lead to suboptimal results. Make sure you are correctly applying the diffusion operations and that there are no unintended biases or patterns in the encrypted image.
Parameter tuning: Some encryption algorithms rely on adjustable parameters that can impact the diffusion properties. Experiment with different parameter values to find the optimal settings that provide better NPCR and UACI results.
Comparison with existing algorithms: Compare your results with other established encryption algorithms that use similar techniques or chaotic maps. This can help you identify any major discrepancies and provide insights into potential improvements.
Additionally, it's worth noting that achieving the exact NPCR and UACI values reported in the literature may not always be feasible due to variations in implementation, image datasets, and testing methodologies. However, it's important to strive for results that are close to the reported values to ensure the robustness of your encryption algorithm.
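For reference, here is a minimal sketch of how NPCR and UACI are usually computed for two 8-bit cipher images; the arrays below are random stand-ins, whereas in practice C1 and C2 are the cipher images of two plain images differing in a single pixel.

```python
# Minimal sketch: NPCR and UACI for two 8-bit greyscale cipher images.
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray):
    c1 = c1.astype(np.float64)
    c2 = c2.astype(np.float64)
    npcr = np.mean(c1 != c2) * 100.0                 # percentage of differing pixels
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0  # average intensity change
    return npcr, uaci

# Random stand-ins for two cipher images (hypothetical data).
rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print("NPCR %.2f%%, UACI %.2f%%" % npcr_uaci(c1, c2))
```

For two statistically independent, uniformly distributed 8-bit images, these formulas give values close to the ideal 99.61% and 33.46%; much lower measured values usually point to weak diffusion or to comparing the wrong pair of images.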
Please recommend my reply if you find it useful. Thanks.
  • asked a question related to Images
Question
1 answer
Since humans are made in God’s image, how could God not be humanist?
Relevant answer
The discussion seems to ask, first of all, to differentiate between the human being made in the image of God and, on the other hand, what humanism is. I think these are two different things, because to affirm that the human being is made in the image of God means that God creates the human being in his image, as if he were a copy of God; therefore, in the human being there is something divine, of God, but as a copy, as an image, not as an original. On the other hand, humanism is a cultural and philosophical movement that places the human being at the center of the universe. There can be different humanisms, for example Marxist, Christian, or atheist, among others. It would be appropriate to clarify which humanism the question refers to. If, for example, we accept that God is a personal God and that he creates human beings in his image, then we seem to be dealing with a Christian humanism. From this point of view, it seems that God is a humanist, because his most important work is the human being, a copy of himself.
  • asked a question related to Images
Question
3 answers
I am currently exploring the latest developments in medical image denoising and would greatly appreciate any insights or recommendations on state-of-the-art methodologies. If you are aware of recent advancements, innovative algorithms, or key research papers in this field, I would be grateful for your guidance.
Thank you for your time and expertise.
Relevant answer
Answer
Can you make a sample image available?
  • asked a question related to Images
Question
1 answer
I'm currently working with a Zeiss Observer 7 microscope, aiming to achieve optimal clarity in the images. However, I'm facing challenges in obtaining high-quality, clear images, as can be seen in the attached image. I would greatly appreciate any insights, recommendations, or best practices from researchers with expertise in similar imaging setups.
Relevant answer
Answer
You could try deconvolution. This is done after the image is taken and is a computational approach to remove blur caused by out-of-focus light or light scattering. There is likely a deconvolution setting in the image capture software you are using, or it can be done in ImageJ.
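As a minimal sketch of the computational route, here is Richardson-Lucy deconvolution with scikit-image; the Gaussian PSF below is a crude stand-in for the microscope's real (measured or modelled) PSF, and the file names are placeholders.

```python
# Minimal sketch: Richardson-Lucy deconvolution of a single-plane widefield image.
import numpy as np
from skimage import io, img_as_float, restoration

img = img_as_float(io.imread("widefield_image.tif"))   # hypothetical input image

# Hypothetical 2-D Gaussian PSF; a measured bead PSF would be preferable.
x = np.arange(-7, 8)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

deconvolved = restoration.richardson_lucy(img, psf, 30)   # 30 iterations
out = (np.clip(deconvolved, 0, 1) * 255).astype(np.uint8)
io.imsave("widefield_deconvolved.tif", out)
```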
  • asked a question related to Images
Question
2 answers
In order to analyse the effect of data manipulation on DWI images, it is often of interest to understand the effect of the number of gradients.
When the original DWI data are acquired with a substantial number of gradients (e.g. N > 60) using a single-shell DTI acquisition, it is possible to create other DWI volumes with a smaller number of gradients.
What computational tools or methodologies discussed in the literature support this kind of image processing?
Relevant answer
Answer
Some commonly debated approaches:
1. Subsampling: This method involves randomly selecting a subset of gradients from the original DWI dataset to create a new dataset with a reduced number of gradients. The subsampling process should be carefully designed to ensure that the selected gradients are representative of the diffusion information in the original dataset. Various subsampling strategies have been proposed, such as uniform random sampling or stratified sampling based on diffusion directions.
2. Interpolation: Interpolation techniques can be used to estimate the missing data points for the reduced set of gradients. These techniques utilize the information from the neighboring gradients to infer the diffusion signal at the missing gradient directions. Various interpolation methods, such as linear interpolation, spherical harmonics interpolation, or B-spline interpolation, have been explored for this purpose.
3. Reconstruction: Reconstruction methods aim to recover the complete diffusion signal by leveraging advanced algorithms and mathematical models. Compressed sensing (CS) is a widely used technique in DWI reconstruction, which exploits the sparsity of diffusion signals in a certain transform domain (e.g., wavelet domain) to reconstruct the missing information. Other reconstruction techniques, such as low-rank matrix completion or dictionary learning-based methods, have also been investigated.
4. Image registration: Image registration techniques can be employed to align the DWI volumes acquired with different numbers of gradients. By registering the DWI volumes to a common space, it is possible to compare and analyze the effects of data manipulation more accurately. Nonlinear registration algorithms, such as diffeomorphic registration, are often used to handle the deformation caused by the differences in gradient sets.
5. Quantitative evaluation: To assess the impact of data manipulation on DWI, various quantitative metrics can be employed. These metrics include measures of diffusion tensor properties (e.g., fractional anisotropy, mean diffusivity), diffusion model fitting errors, or statistical analysis of diffusion properties in specific regions of interest (ROIs). These evaluations can provide insights into the quality and reliability of the processed DWI data.
It is worth noting that the choice of computational tools and methodologies depends on the specific research objectives and the characteristics of the DWI data.
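As a minimal sketch of the subsampling idea in point 1 above: the shapes and data below are random placeholders standing in for a real single-shell acquisition and its b-vector table, and in practice schemes that preserve uniform angular coverage are preferable to purely random selection.

```python
# Minimal sketch: subsampling diffusion gradients from a single-shell DWI acquisition.
import numpy as np

n_orig, n_keep = 64, 32
bvecs = np.random.randn(n_orig, 3)                       # stand-in for the real b-vectors
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
dwi = np.random.rand(96, 96, 60, n_orig)                 # stand-in 4-D DWI data (x, y, z, gradient)

# Uniform random subsampling of gradient directions; keep volumes and b-vectors consistent.
idx = np.sort(np.random.choice(n_orig, size=n_keep, replace=False))
dwi_sub = dwi[..., idx]
bvecs_sub = bvecs[idx]
print(dwi_sub.shape, bvecs_sub.shape)
```

The corresponding b-values (and any b=0 volumes) must of course be subset with the same indices so the reduced dataset remains internally consistent.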
Hope it helps (credit: AI).
  • asked a question related to Images
Question
2 answers
The scale bar is 100 μm, and their lengths vary from 130 μm to 240 and 350 μm.
The image was taken with an optical microscope. The samples might have been in contact with dirty (pond, river) water.
Relevant answer
Answer
Thank you Carmen Gallas !
  • asked a question related to Images
Question
1 answer
Hello everyone,
I want to use the AOD product (MCD19A2 - MODIS/Terra+Aqua Land Aerosol Optical Depth Daily L2G Global 1 km SIN Grid) as an input to a deep learning model (e.g. a CNN). My questions are: how should that AOD data be used? Will it need preprocessing, and if yes, how? How does the model get trained from the images, i.e. how will it learn the AOD pattern? How should the image be prepared/processed, as attached?
Relevant answer
Answer
The use of satellite data in deep learning models has gained significant attention in recent years. One such dataset, the AOD product (MCD19A2 - MODIS/Terra+Aqua Land Aerosol Optical Depth Daily L2G Global 1km Sin Grid), offers valuable information about aerosol optical depth. However, the question arises: how can we effectively use this AOD data as an input in a deep learning model, particularly a Convolutional Neural Network (CNN)? In this response, I explore the necessary preprocessing steps for utilizing AOD data and discuss how the model can be trained to understand the AOD pattern.
Preprocessing is an essential step when working with any type of data, and AOD data is no exception. The first step would involve normalizing the AOD values to ensure they fall within a specific range. This normalization process helps prevent any bias towards certain values and allows for better convergence during training. Additionally, it is crucial to handle missing or invalid values appropriately by either imputing them or removing them from the dataset.
Another important consideration is spatial resolution. The original AOD dataset may have a high spatial resolution, such as 1km x 1km pixels. However, CNNs typically work best with lower-resolution images due to memory constraints and computational efficiency. Therefore, it might be necessary to downsample or aggregate the AOD data into larger pixels before feeding it into the CNN model.
Furthermore, combining AOD data with other relevant satellite imagery can enhance its predictive power. For instance, incorporating spectral bands from multispectral sensors like MODIS or Landsat can provide additional contextual information that complements the AOD measurements. This fusion of different datasets can help capture more complex patterns and improve overall model performance.
Now let's delve into how a CNN model can be trained to understand the AOD pattern using these preprocessed images as input. Initially, a labeled training dataset would be required, consisting of AOD values and corresponding ground truth labels. These labels could represent various categories, such as pollution levels or aerosol types.
During the training process, the CNN model learns to extract relevant features from the input images that are indicative of the desired output. In this case, it would learn to identify patterns in the AOD data that correspond to specific pollution levels or aerosol types. This learning is achieved through a combination of convolutional layers, pooling layers, and fully connected layers within the CNN architecture.
To ensure effective training, it is crucial to have a diverse and representative dataset that covers a wide range of AOD patterns. This can help prevent overfitting and improve generalization performance when applied to unseen data.
Finally, how should the image be prepared or processed before being attached? As mentioned earlier, downsampling or aggregating the AOD data into larger pixels is necessary for compatibility with CNN models. Additionally, any other satellite imagery being used should also undergo preprocessing steps like radiometric calibration and atmospheric correction to account for sensor-specific biases and atmospheric interference.
In conclusion, utilizing AOD data as an input in a deep learning model like a CNN requires several preprocessing steps. Normalizing values, handling missing data appropriately, downsampling or aggregating spatial resolution, and combining with other relevant datasets are essential considerations. The model can then be trained using labeled datasets to understand patterns in the AOD data. Preparing images involves downsampling AOD data and applying radiometric calibration and atmospheric correction if necessary. By following these steps effectively, we can harness the power of deep learning models to analyze AOD patterns accurately and derive valuable insights from satellite imagery.
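As a rough sketch of the preprocessing steps described above: the file name, fill value and scale factor below are placeholders and must be checked against the actual MCD19A2 product metadata before use.

```python
# Minimal sketch: mask fill values, scale, normalise and spatially aggregate an AOD tile
# before feeding it to a CNN.
import numpy as np

aod = np.load("aod_tile.npy").astype(np.float64)   # hypothetical 1 km AOD array
aod = np.where(aod == -28672, np.nan, aod)         # mask the fill value (check product metadata)
aod = aod * 0.001                                  # apply the scale factor (check product metadata)

# Normalise to [0, 1], ignoring missing pixels, then impute missing values simply.
amin, amax = np.nanmin(aod), np.nanmax(aod)
aod_norm = (aod - amin) / (amax - amin)
aod_norm = np.nan_to_num(aod_norm, nan=0.0)

# Aggregate 1 km pixels into 4x4 blocks (~4 km) by block averaging.
h, w = aod_norm.shape
block = 4
aod_coarse = aod_norm[: h - h % block, : w - w % block]
aod_coarse = aod_coarse.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
print(aod_coarse.shape)
```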
  • asked a question related to Images
Question
3 answers
I want to optimize it using Gaussian 09; however, it is giving error #2070. What parameters should we choose for optimizing using DFT? The image of the crystal structure is attached below.
Any clarifications will be appreciated.
Thanks
Relevant answer
Answer
Greetings,
Your compound is not a discrete molecule, as you must know and as seen in the 3x3x3 packing attached below. This implies that you should model it using periodic boundary conditions (PBC). For PBC, you can find the syntax and an example here: https://gaussian.com/pbc/; the parameters you can tweak are also shown on the same page.