Science topic
Image Processing - Science topic
All kinds of image processing approaches.
Questions related to Image Processing
2024 4th International Conference on Computer, Remote Sensing and Aerospace (CRSA 2024) will be held in Osaka, Japan, on July 5-7, 2024.
Conference Website: https://ais.cn/u/MJVjiu
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Algorithms
Image Processing
Data processing
Data Mining
Computer Vision
Computer Aided Design
......
2. Remote Sensing
Optical Remote Sensing
Microwave Remote Sensing
Remote Sensing Information Engineering
Geographic Information System
Global Navigation Satellite System
......
3. Aeroacoustics
Aeroelasticity and structural dynamics
Aerothermodynamics
Airworthiness
Autonomy
Mechanisms
......
All accepted papers will be published in the conference proceedings and submitted to EI Compendex and Scopus for indexing.
Important Dates:
Full Paper Submission Date: May 31, 2024
Registration Deadline: May 31, 2024
Conference Date: July 5-7, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code during submission/registration gives priority review and feedback.
Call for Papers
The CMC-Computers, Materials & Continua special issue "Emerging Trends and Applications of Deep Learning for Biomedical Signal and Image Processing" is now open for submission.
Submission Deadline: 31 March 2025
Guest Editors
- Prof. Batyrkhan Omarov, Al-Farabi Kazakh National University, Kazakhstan
- Prof. Aigerim Altayeva, International Information Technology University, Kazakhstan
- Prof. Bakhytzhan Omarov, International University of Tourism and Hospitality, Kazakhstan
Summary
In this special issue, we delve into the cutting-edge advancements and transformative applications of deep learning techniques within the realms of biomedical engineering and healthcare. Deep learning, a subset of artificial intelligence, has emerged as a groundbreaking tool, offering unparalleled capabilities in interpreting complex biomedical signals and images. This issue brings together a collection of research articles, reviews, and case studies that highlight the innovative integration of deep learning methodologies for analyzing physiological signals (such as EEG, ECG, and EMG) and medical images (including MRI, CT scans, and X-rays).
The content spans a broad spectrum, from theoretical frameworks and algorithm development to practical applications and case studies, providing insights into the current state-of-the-art and future directions in this rapidly evolving field. Key themes include, but are not limited to, the development of novel deep learning models for disease diagnosis and prognosis, enhancement of image quality and interpretation, real-time monitoring and analysis of biomedical signals, and personalized healthcare solutions.
Contributors to this issue showcase the significant impact of deep learning on improving diagnostic accuracy, enabling early detection of abnormalities, and facilitating personalized treatment plans. Furthermore, discussions extend to ethical considerations, data privacy, and the challenges of implementing AI technologies in clinical settings, offering a comprehensive overview of the landscape of deep learning applications in biomedical signal and image processing.
Through a blend of technical depth and accessibility, this special issue aims to inform and inspire researchers, clinicians, and industry professionals about the potential of deep learning to revolutionize healthcare, paving the way for more innovative, efficient, and personalized medical care.
For submission guidelines and details, visit: https://www.techscience.com/.../special.../biomedical-signal
I am delighted to announce that, with endless effort and cooperation with my brother Prof. Mostafa Elhosseini, we succeeded in wrapping up our special issue, entitled "Deep and Machine Learning for Image Processing: Medical and Non-medical Applications," with this nice editorial paper that highlights the research innovations of the valued contributors and opens the way for future endeavors. It is worth mentioning that this special issue attracted more than 35 contributions, of which only 12 were published in the end. Please enjoy reading it, and a shout-out to my professional co-editor, Prof. Mostafa Elhosseini, all the contributors, and the Electronics Editorial Office.
The link for the paper can be found here:
2024 4th International Conference on Image Processing and Intelligent Control (IPIC 2024) will be held from May 10 to 12, 2024 in Kuala Lumpur, Malaysia.
Conference Website: https://ais.cn/u/ZBn2Yr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Image Processing
- Image Enhancement and Recovery
- Target detection and tracking
- Image segmentation and labeling
- Feature extraction and image recognition
- Image compression and coding
......
◕ Intelligent Control
- Sensors in Intelligent Photovoltaic Systems
- Sensors and Laser Control Technology
- Optical Imaging and Image Processing in Intelligent Control
- Fiber-optic sensing technology in intelligent optoelectronic systems
......
All accepted papers will be published in conference proceedings, and submitted to EI Compendex, Inspec and Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 19, 2024
Registration Deadline: May 3, 2024
Final Paper Submission Date: May 3, 2024
Conference Dates: May 10-12, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code during submission/registration gives priority review and feedback.
2024 IEEE 7th International Conference on Computer Information Science and Application Technology (CISAT 2024) will be held on July 12-14, 2024 in Hangzhou, China.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Bioinformatics and Scientific Computing
......
◕ Intelligent Computing and Artificial Intelligence
· Basic Theory and Application of Artificial Intelligence
· Big Data Analysis and Processing
· Biometric Identification
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Intelligent Transportation
· Intelligent Transportation Systems
· Vehicular Networks
· Edge Computing
· Spatiotemporal Data
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and will also be submitted to EI Compendex and Scopus for indexing. Each conference proceedings paper must be at least 4 pages long.
Important Dates:
Full Paper Submission Date: April 14, 2024
Submission Date: May 12, 2024
Registration Deadline: June 14, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code during submission/registration gives priority review and feedback.
2024 3rd International Conference on Automation, Electronic Science and Technology (AEST 2024) will be held in Kunming, China, on June 7-9, 2024.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
(1) Electronic Science and Technology
· Signal Processing
· Image Processing
· Semiconductor Technology
· Integrated Circuits
· Physical Electronics
· Electronic Circuit
......
(2) Automation
· Linear System Control
· Control Integrated Circuits and Applications
· Parallel Control and Management of Complex Systems
· Automatic Control System
· Automation and Monitoring System
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 1, 2024
Registration Deadline: May 24, 2024
Final Paper Submission Date: May 31, 2024
Conference Dates: June 7-9, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code during submission/registration gives priority review and feedback.
The 3rd International Conference on Optoelectronic Information and Functional Materials (OIFM 2024) will be held in Wuhan, China from April 5 to 7, 2024.
The annual Optoelectronic Information and Functional Materials conference (OIFM) offers delegates and members a forum to present and discuss the most recent research. Delegates and members will have numerous opportunities to join in discussions on these topics. Additionally, it offers fresh perspectives and brings together academics, researchers, engineers, and students from universities and businesses throughout the globe under one roof.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Optoelectronic information science
- Optoelectronics
- Optical communication and optical network
- Optical fiber communication and system
......
2. Information and Communication Engineering
- Communication and information system
- Wireless communication, data transmission
- Switching and broadband network
......
3. Materials science and Engineering
- New materials
- Optoelectronic functional materials and devices
- Bonding material
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: February 5, 2024
Registration Deadline: March 22, 2024
Final Paper Submission Date: March 29, 2024
Conference Dates: April 5-7, 2024
For More Details please visit:
I am trying to train a CNN model in MATLAB to predict the mean value of a random vector (the MATLAB code, named Test_2, is attached). To clarify further, I generate a random vector with 10 components (using the rand function) 500 times. The figure of each vector versus 1:10 is plotted and saved separately, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validating (15%) and testing (15%) a CNN model which is supposed to predict the mean value of the mentioned random vectors (Y). However, the RMSE of the model is far too high; in other words, the model does not learn despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
I am working on lane line detection using lidar point clouds and a sliding window to detect lane lines. As lane lines have higher intensity values than asphalt, we can use the intensity values to differentiate lane lines from low-intensity non-lane-line points. However, my lane detection suffers from noisy points, i.e. high-intensity non-lane-line points. I've tried intensity thresholding and statistical outlier removal based on intensity, but they don't seem to work, as I am dealing with some pretty noisy point clouds. Please suggest some non-AI-based methods which I can use to get rid of the noisy points.
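A non-AI route that sometimes helps here is to make the intensity threshold adaptive (percentile-based per cloud) and then reject bright points that are spatially isolated, since lane markings form locally dense stripes. A minimal Python sketch, assuming `points` is an (N, 4) array of x, y, z, intensity; the percentile, radius and neighbour count are placeholders to tune:

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_lane_candidates(points, intensity_pct=95, radius=0.3, min_neighbors=5):
    # Adaptive threshold: keep only the brightest returns of this particular cloud.
    thr = np.percentile(points[:, 3], intensity_pct)
    bright = points[points[:, 3] >= thr]

    # Lane paint forms locally dense stripes; isolated bright returns
    # (retro-reflectors, debris) usually have few bright neighbours.
    tree = cKDTree(bright[:, :2])
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in bright[:, :2]])
    return bright[counts >= min_neighbors]
```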
In the rapidly evolving landscape of the Internet of Things (IoT), the integration of blockchain, machine learning, and natural language processing (NLP) holds promise for strengthening cybersecurity measures. This question explores the potential synergies among these technologies in detecting anomalies, ensuring data integrity, and fortifying the security of interconnected devices.
Seeking insights on optimizing CNNs to meet low-latency demands in real-time image processing scenarios. Interested in efficient model architectures or algorithmic enhancements.
This question blends various emerging technologies to spark discussion. It asks if sophisticated image recognition AI, trained on leaked bioinformatics data (e.g., genetic profiles), could identify vulnerabilities in medical devices connected to the Internet of Things (IoT). These vulnerabilities could then be exploited through "quantum-resistant backdoors" – hidden flaws that remain secure even against potential future advances in quantum computing. This scenario raises concerns for cybersecurity, ethical hacking practices, and the responsible development of both AI and medical technology.
2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2024) will be held on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: February 1, 2024
Registration Deadline: March 1, 2024
Final Paper Submission Date: March 15, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Vision and Image technologies
- DL Technologies
- DL Applications
All accepted papers will be published by IEEE and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
Today AI is evolving more rapidly than ever. When it comes to production, there are many arguments about how or where to add artificial intelligence. Even production performance monitoring is still done manually. Image processing AI that gathers data from live video needs a lot of processing power and high-quality infrastructure. What do you think about monitoring production performance: can we use image processing technology, and how can we make it more precise?
It would be very helpful for me to hear your thoughts on the questions above.
I am trying to get an insight into the above-mentioned research paper, especially the filtering process used to remove grid artifacts. However, I find it difficult to understand correctly.
I would be very grateful if anyone could help me clarify a few questions that I have.
My questions are as follows:
1) What are the pixel values of the mean filter they used? They mention using an improved mean filter, but what is the improvement?
2) Do they apply the mean filter to the whole patch image (it seems so), or only in the grid-signal region (the characteristic peak range)?
3) What do they mean by (u1,v1) being the Fmax value? Does that mean that the center pixel of the mean filter is replaced by this maximum value?
Thanks in advance!
Hello,
I have the ImageJ image processing software and would like to calculate and plot the curvature of a beam using it. My online search suggests downloading the PhotoBend plugin. Can someone suggest a solution for determining the curvature of a beam using image processing software?
Recently I have started using ChimeraX, version 1.1. Unfortunately, I could not find any options to export images at a resolution greater than 96 dpi. I have tried the steps written here (http://plato.cgl.ucsf.edu/pipermail/chimerax-users/2020-September/001508.html), but this did not work for me.
Is there any way to solve this issue? It will be a great help to me.
Hello. I have a series of SEM images containing nanorods and nanoparticles. I want to determine what percentage are nanorods and what percentage are nanoparticles, and to separate them by color. Does anyone know how to do this with MATLAB?
What is the best way to convert events (x, y, polarity, timestamp) obtained from an event camera via ROS into frames for a real-time application?
Or is there a way to deal with these events directly, without conversion?
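For the conversion route, the usual trick is to accumulate events into a 2-D histogram per fixed time window (or per fixed event count). A minimal numpy sketch, assuming `events` is an (N, 4) array of (x, y, polarity, timestamp):

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate events with timestamps in [t_start, t_end) into a signed frame.
    Polarity is assumed to be encoded as {0, 1} or {-1, +1}."""
    frame = np.zeros((height, width), dtype=np.int32)
    mask = (events[:, 3] >= t_start) & (events[:, 3] < t_end)
    sel = events[mask]
    pol = np.where(sel[:, 2] > 0, 1, -1)
    # Scatter-add each event's polarity at its (y, x) pixel.
    np.add.at(frame, (sel[:, 1].astype(int), sel[:, 0].astype(int)), pol)
    return frame
```

Event representations that avoid fixed-window accumulation (time surfaces, voxel grids) or filters that operate on the event stream directly are also common; the right choice depends on the downstream algorithm.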
Hi everyone
I'm facing a real problem when trying to export data results from ImageJ (Fiji) to Excel for later processing.
The problem is that I have to manually change the dots (.) and commas (,), even after changing the properties in Excel (from , to .), so that the numbers are not counted as thousands: say I have 1,302 (one point three zero two), it is counted as 1302 (one thousand three hundred and two) when I transfer it to Excel.
Lately I found a nice plugin (Localized copy...) that can change the number format locally in ImageJ so it can be used easily by Excel.
Unfortunately, this plugin has some bugs: it can only copy one line of the huge dataset I have, and only once (so I have to close and reopen the image again).
Has anyone faced this problem? Can anyone please suggest another solution?
Thanks in advance
Problem finally solved: I got the new version of the 'Localized copy' plugin from its author, Mr. Wolfgang Gross (not sure if I have permission to upload it here).
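For anyone hitting the same issue: another workaround is to save the Results table to a file and let the reading side handle the decimal separator explicitly, rather than fixing it by hand. A small pandas sketch, assuming the table was exported with commas as decimals (the file name and column separator are placeholders):

```python
import pandas as pd

# Read an ImageJ/Fiji results export where decimals use commas, so that
# 1,302 is parsed as 1.302 rather than 1302.
results = pd.read_csv("Results.csv", sep=";", decimal=",")
results.to_excel("Results.xlsx", index=False)  # needs openpyxl installed
```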
I have worked on image processing for image fusion and image watermarking.
At present I want to work on big data analysis and apply it to medical image processing.
I have a brain MRI dataset which contains four image modalities: T1, T2, FLAIR and T1 contrast-enhanced. From this dataset, I want to segment the non-enhancing tumor core, peritumoral edema and GD-enhancing tumor. I'm confused about which modality I should use for each of the mentioned regions.
I will be thankful for any kind of help to clear up my confusion.
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what are your thoughts on new image processing and deep learning methods for cancer detection?
Thank you in advance for participating in this discussion.
I have an image (most likely a spectrogram); it may be square or rectangular, and I won't know until it is received. I need to downsample one axis, say the 'x' axis. So if it is a spectrogram, I will be downsampling the frequencies (the time, 'y', axis would remain the same). I was thinking of doing a nearest-neighbor resampling of the frequency components. Any idea how I can go about this? Any suggestions would be appreciated. Thanks.
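A nearest-neighbour decimation of a single axis can be done by picking indices along that axis only. A sketch assuming the spectrogram is a 2-D numpy array with frequency on axis 0 (swap the axis argument otherwise):

```python
import numpy as np

def downsample_axis_nearest(img, factor, axis=0):
    """Nearest-neighbour downsampling of one axis only, leaving the other untouched."""
    n = img.shape[axis]
    # Pick the nearest original index for each output sample.
    idx = np.round(np.linspace(0, n - 1, n // factor)).astype(int)
    return np.take(img, idx, axis=axis)

# Example: halve the frequency resolution of a (freq, time) spectrogram.
spec = np.random.rand(512, 1000)
spec_small = downsample_axis_nearest(spec, factor=2, axis=0)  # -> (256, 1000)
```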
I want to overlay a processed image onto an elevation view of an ETABS model using OpenCV and the ETABS API in C#.
I'm eagerly waiting for a chance to work in the image processing research field.
I have a dataset of rice leaves for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques:
- RGB image acquisition & preprocessing (HSV conversion, thresholding and masking)
- Image segmentation (GLCM matrices, wavelets (DWT))
- Classification (SVM, CNN, KNN, Random Forest)
- Results with MATLAB code.
- But I am confused about the final scores in the confusion matrices, so I need a technique to check which extraction method is best for the dataset.
- My main target is detecting normal and abnormal (diseased) leaves with labels.
#image #processing #mathematics #machinelearning #matlab #deeplearning
Actually, I am working in this field. Sometimes I don't understand what I should do. If anyone could supervise me, I would be thankful.
The attached image is taken from a paper.
Greetings!
I want to create Artificial Neural Network in MATLAB version r2015a for recognition of 8 classes of bacteria images.
Honestly, I'm having a hard time with the feature extraction part of the image processing task in MATLAB. Which method should I use to correctly extract the features: one based on threshold pixel values in the binarization of the images, or edge detection first followed by the Generalized Hough Transform to obtain the desired shape? I don't know which of these approaches to take.
For data splitting of the extracted features I will use cvpartition.
The desired ANN architectures i'm planning to use are:
1. feedforwardnet (backpropagation ANN with gradient descent, MSE error)
2. Cascade feedforward network
I'm also interested in using the Cascade-Correlation learning architecture.
Also, is there any information out there that explains the MATLAB GUI shown when the neural network completes training? I want to learn more about how the performance window works, as well as the regression plot, error histogram, and confusion matrix.
Thanks for your time!
If we acquire a tomography dataset, we can extract a lot of physical properties from it, including porosity and permeability. These properties are not directly measured using conventional experiments. Instead, they are calculated using different image processing algorithms. To this end, is there any guideline on how to report such results in terms of significant digits?
Thanks.
Hello
In image processing and image segmentation studies, are the following values the same?
mIoU
IoU
DSC (Dice similarity coefficient)
F1 score
Can we convert them together?
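They are related but not identical. For a single binary mask, DSC and F1 are the same quantity, and Dice and IoU are linked by a monotonic formula, so they can be converted pairwise. mIoU, however, is an average over classes (and often images), and an average of IoUs cannot be converted exactly into an average of Dice scores, because the mapping is non-linear. A small sketch of the per-class conversions:

```python
def iou_to_dice(iou):
    # Dice (= F1 for a binary mask) from IoU (Jaccard): D = 2*IoU / (1 + IoU)
    return 2 * iou / (1 + iou)

def dice_to_iou(dice):
    # Inverse mapping: IoU = D / (2 - D)
    return dice / (2 - dice)

print(iou_to_dice(0.5))    # ~0.667
print(dice_to_iou(0.667))  # ~0.5
```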
As AI continues to progress and surpass human capabilities in various areas, many jobs are at risk of being automated and potentially disappearing altogether. Signal processing, which involves the analysis and manipulation of signals such as sound and images, is one area that AI is making significant strides in. With AI's ability to adapt and learn quickly, it may be able to process signals more efficiently and effectively than humans. This could ultimately lead to fewer job opportunities in the field of signal processing, and a shift toward more AI-powered solutions. The impact of automation on the job market is a topic of ongoing debate and concern, and examining the potential effects on specific industries such as signal processing can provide valuable insights into the future of work.
Dear Colleagues, I started this discussion to collect data on the use of the Azure Kinect camera in research and industry. It is my intention to collect data about libraries, SDKs, scripts and links, which may be useful to make life easier for users and developers using this sensor.
Notes on installing on various operating systems and platforms (Windows, Linux, Jetson, ROS)
- Azure Kinect camera setup (automated scripts for Linux). https://github.com/juancarlosmiranda/azure_kinect_notes
- Azure Kinect ROS Driver. https://github.com/microsoft/Azure_Kinect_ROS_Driver
SDKs for programming
- Microsoft SDK C/C++. https://learn.microsoft.com/en-us/azure/kinect-dk/sensor-sdk-download
- Azure Kinect Body Tracking SDK. https://learn.microsoft.com/en-us/azure/kinect-dk/body-sdk-download
- Github Azure Kinect SDK. https://github.com/microsoft/Azure-Kinect-Sensor-SDK
- KinZ an Azure Kinect toolkit for Python and Matlab.
- pyk4a - a simple and pythonic wrapper in Python 3 for the Azure-Kinect-Sensor-SDK. https://github.com/etiennedub/pyk4a
Tools for recording and data extraction (update 10/08/2023)
- Azure Kinect DK recorder. https://learn.microsoft.com/en-us/azure/kinect-dk/azure-kinect-recorder
- Azure Kinect Viewer. https://learn.microsoft.com/en-us/azure/kinect-dk/azure-kinect-viewer
- AK_SM_RECORDER. A simple GUI recorder based on Python to manage Azure Kinect camera devices in a standalone mode. (https://pypi.org/project/ak-sm-recorder/)
- AK_ACQS is a software solution for data acquisition in fruit orchards using a sensor system mounted on a terrestrial vehicle. It allows the coordination of computers and sensors through the sending of remote commands via a GUI. At the same time, it adds an abstraction layer on the library stack of each sensor, facilitating its integration. This software solution is supported by a local area network (LAN), which connects computers and sensors from different manufacturers (cameras of different technologies, GNSS receiver) for in-field fruit yield testing. (https://github.com/GRAP-UdL-AT/ak_acquisition_system)
- AK_FRAEX is a desktop tool created for post-processing tasks after field acquisition. It enables the extraction of information from videos recorded in MKV format with the Azure Kinect camera. Through a GUI, the user can configure initial parameters to extract frames and automatically create the necessary metadata for a set of images. (https://pypi.org/project/ak-frame-extractor/)
Tools for fruit sizing and yield prediction (update 19/09/2023)
- AK_SW_BENCHMARKER. Python based GUI tool for fruit size estimation and weight prediction. (https://pypi.org/project/ak-sw-benchmarker/)
- AK_VIDEO_ANALYSER. Python based GUI tool for fruit size estimation and weight prediction from videos recorded with the Azure Kinect DK sensor camera in Matroska format. It receives as input a set of videos to analyse and gives as result reports in CSV datasheet format with measures and weight predictions of each detected fruit. (https://pypi.org/project/ak-video-analyser/).
Demo videos to test the software (update 10/08/2023)
- AK_FRAEX - Azure Kinect Frame Extractor demo videos. https://doi.org/10.5281/zenodo.6968103
- AK_FRAEX - Azure Kinect Frame Extractor demo videos (updated with BGRA32 videos for 3D point cloud extraction). https://doi.org/10.5281/zenodo.8232445
Papers, articles (update 09/05/2024)
Agricultural
- AKFruitData: A dual software application for Azure Kinect cameras to acquire and extract informative data in yield tests performed in fruit orchard environments. [https://www.sciencedirect.com/science/article/pii/S2352711022001492]
- AKFruitYield: Modular benchmarking and video analysis software for Azure Kinect cameras for fruit size and fruit yield estimation in apple orchards. [https://www.sciencedirect.com/science/article/pii/S2352711023002443]
- Assessing automatic data processing algorithms for RGB-D cameras to predict fruit size and weight in apples. [https://www.sciencedirect.com/science/article/pii/S0168169923006907]
Clinical applications/ health
- Experimental Procedure for the Metrological Characterization of Time-of-Flight Cameras for Human Body 3D Measurements. [ ]
- Hand tracking for clinical applications: validation of the Google MediaPipe Hand (GMH) and the depth-enhanced GMH-D frameworks. [ ]
Keywords:
#python #computer-vision #computer-vision-tools
#data-acquisition #object-detection #detection-and-simulation-algorithms
#camera #images #video #rgb-d #rgb-depth-image
#azure-kinect #azure-kinect-dk #azure-kinect-sdk
#fruit-sizing #apple-fruit-sizing #fruit-yield-trials #precision-fruticulture #yield-prediction #allometry
How does thermal image processing work in the agriculture sector?
Hello,
I am working on a research project that involves detecting cavities and other teeth problems in panoramic X-rays. I am looking for datasets that I can use to train my convolutional neural network. I have been searching the internet for such datasets, but I haven't found anything so far. Any suggestions are greatly appreciated! Thank you in advance!
I need to publish a research paper in an impact-factor journal with a high acceptance rate and fast review time.
How do you think artificial intelligence can affect medicine in the real world? There are many science-fiction dreams in this regard!
But what about real life in the next 2-3 decades?
My protein levels appear to vary across different cell types, layers, and localizations (cytoplasm/nucleus) of the Arabidopsis root tip (in wild-type and mutant backgrounds).
I wonder what my approach should be to compare differences in protein expression levels and localization between the two genotypes.
I take a Z-stack on a confocal microscope; usually I make a maximum intensity projection of the Z-stack and try to understand the differences. But since the differences are not only in intensities but also in cell types and layers, how should I choose the layers between two samples?
My concern is how to find the exact matching layers between two genotypes, as the root thickness is not always the same and some Z-stacks, for example, have 55 slices and some have 60.
thanks!
I am trying to open fMRI images on my PC, but (I think) no appropriate software is installed. Hence I am not able to open individual images on my PC.
I have a photo of bunches of walnut fruit in rows, and I want to develop a semi-automated workflow in ImageJ to label them and create a new image from the edges of each selected ROI.
What I have done so far: segmented the walnuts from the background with a suitable threshold, then selected all of the walnuts as a single ROI.
Now I need to know how I can label the different regions of the ROI and count them, adding them to the ROI Manager. Finally, these ROIs must be cropped at their edges, and a new image of each walnut should be saved individually.
Thoughts on how to do this, as well as tips on the code to do so, would be great.
Thanks!
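Within ImageJ itself, running Analyze > Analyze Particles on the thresholded image with "Add to Manager" checked will label and count the separate walnut regions in one step. As a scripted alternative, here is a scikit-image sketch of the same label-and-crop idea (file names are placeholders):

```python
from skimage import io, measure

img = io.imread("walnuts.png")             # hypothetical original photo
mask = io.imread("walnuts_mask.png") > 0   # binary result of the thresholding step

labels = measure.label(mask)               # connected components = individual walnuts
for i, region in enumerate(measure.regionprops(labels), start=1):
    minr, minc, maxr, maxc = region.bbox
    io.imsave(f"walnut_{i:03d}.png", img[minr:maxr, minc:maxc])  # one crop per walnut
print(f"{labels.max()} walnuts labelled and cropped")
```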
Basically, I am interested in skin disease detection using image processing.
Kindly suggest the technology to be used and a research problem.
I'm currently doing research in image processing using tensors, and I found that many test images repeatedly appear across the related literature. They include: Airplane, Baboon, Barbara, Facade, House, Lena, Peppers, Giant, Wasabi, etc. However, they are not referenced with a specific source. I found some of them in the SIPI dataset, but many others are missing. I'm wondering if there are "standards" for the selection of test images, and where the standardized images can be found. Thank you!
I'm currently training an ML model that can estimate sex based on dimensions of the proximal femur from radiographs. I've taken X-ray images of ALL of the samples in the osteological collection in Chiang Mai, left side only, which came to a total of 354 samples. I also took X-ray photos of the right femur and a posterior-anterior view of the same samples (randomized, and only a selected few, n=94 in total) to test the difference, dimension-wise. I have exhausted all the samples for training and validating the model (5-fold), which results in great sexing accuracy. So, I am wondering whether it is appropriate to test the models with right-femur and posterior-anterior-view radiographs, which will then be flipped to resemble left-femur X-ray images, given the limitations of our skeletal collection.
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
Thank you.
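If the image is north-up with no rotation (so the pixel size is constant along each axis), one known reference pixel is enough to get every other pixel's coordinates by offsetting with the pixel size; with several known pixels you can instead fit an affine transform by least squares. A minimal sketch of the first case, with all argument names being placeholders:

```python
def pixel_to_coord(row, col, ref_row, ref_col, ref_x, ref_y, pixel_size_x, pixel_size_y):
    """Map a pixel index to map coordinates, assuming a north-up image with
    axis-aligned pixels (no rotation) and one pixel whose coordinates are known."""
    x = ref_x + (col - ref_col) * pixel_size_x
    y = ref_y - (row - ref_row) * pixel_size_y  # row index grows downwards
    return x, y

# Example: reference pixel (100, 250) is at (432000.0, 7654000.0), 10 m pixels.
print(pixel_to_coord(105, 260, 100, 250, 432000.0, 7654000.0, 10.0, 10.0))
```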
Hi everyone. In the field of magnetometry there is a vast body of work relating to the identification of various ferromagnetic field conditions, but very little devoted to diamagnetic anomalies in the datasets, both for airborne and satellite sources. For my current application we're utilizing satellite-based magnetometry data and are already working on image processing algorithms that can enhance the spatial resolution of the dataset for more localized ground-based analysis. However, we're having difficulty creating any form of machine learning system that can identify the repelling forces of diamagnetic anomalies underground, primarily due to the weakness of the reversed field itself. I was just wondering if anyone had any sources relating to this kind of remote sensing application, or any technical principles that we could apply to help jumpstart the project's development. Thanks for any and all information.
Hello dear RG community.
I started working with PIV some time ago. It's been an excruciating time figuring out how to deal with the thing (even though I like PIV).
Another person I know spent about 2.5 months figuring out how to do smoke viz. And yet another person I know is desperately trying to figure out how to do LIF (with no success so far).
As a newcomer to the area, I can't emphasize enough how valuable any piece of help is.
I noticed there is not one nice forum covering everything related to flow visualization.
There are separate forums on PIV analysis and general image processing (let me take an opportunity here to express my sincere gratitude to Dr. Alex Liberzon for the OpenPIV Google group that he is actively maintaining). Dantec and LaVision tech support is nice indeed.
But, still, I feel like I want one big forum about absolutely anything related to flow vis: how to troubleshoot hardware, how to pick particles, best practices in image preprocessing, how to use commercial GUIs, how to do smoke vis, how to do LIF, refractive index matching for flow vis in porous media, PIV in very high speed flows, shadowgraphy, schlieren and so on.
Reading about theory of PIV and how to do it is one thing. But when it comes to obtaining images - oh, that can easily turn to a nightmare! I want a forum where we can share practical skills.
I'm thinking about creating a flow vis StackExchange website.
Area51 is the part of StackExchange where one can propose a new StackExchange website. They have pretty strict rules for proposals. Proposals have to go through 3 stages of a life cycle before they are allowed to become full-blown StackExchange websites. The main criterion is how many people visit the proposed website and ask and answer questions.
Before a website is proposed, one needs to ensure there are people interested in the subject. Once the website has been proposed, one has 3 days to get at least 5 questions posted and answered, preferably by the people who had expressed their interest in the topic. If the requirement is fulfilled, the proposal is allowed to go on.
Thus, I'm wondering what the dear RG community thinks. Are there people interested in the endeavor? Is there a "seeding community" of enthusiasts who are ready to post and answer at least 5 questions within the first 3 days?
If so, let me know in the comments, please. I will propose a community and post the instructions for you how to register in Area51, verify your email and post and answer the questions.
Bear in mind, that since we have not only to post the questions but also answer them the "seeding community" should better include flow vis experts.
How can I plot a '+' at the center of a circle obtained from the Hough transform?
I obtained the centers in the workspace as "centers = [a, b]".
When I plot with this command:
plot(centers, 'r+', 'MarkerSize', 3, 'LineWidth', 2);
I get the '+' at a and b on the same axis.
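In MATLAB, passing the whole centers matrix to plot makes it plot each column against its row index; passing the x and y parts separately, e.g. plot(centers(:,1), centers(:,2), 'r+') (or plot(centers(1), centers(2), 'r+') for a single centre), places the markers at the centres. The same idea in an OpenCV sketch, with the file name and Hough parameters as placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Mark each detected centre with a red '+' (x and y passed separately).
        cv2.drawMarker(vis, (x, y), (0, 0, 255), markerType=cv2.MARKER_CROSS,
                       markerSize=9, thickness=2)
cv2.imwrite("coins_marked.png", vis)
```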
For two logistic chaotic sequences, we generate two y sequences (Y1, Y2) to encrypt the data.
For a 2D logistic chaotic sequence, we generate an x and a y sequence to encrypt the data.
Are the above statements correct? Kindly help with this, and kindly share a relevant paper if possible.
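The two statements describe different generators: the first uses two independent 1-D logistic maps, the second a single coupled 2-D map. A sketch of both, with the caveat that several variants of the coupled 2-D logistic map exist in the encryption literature, so the update rule and parameter below are an assumed form rather than the definitive one:

```python
import numpy as np

def logistic_1d(x0, r=3.99, n=1000):
    # Classic 1-D logistic map: x_{k+1} = r * x_k * (1 - x_k).
    seq, x = np.empty(n), x0
    for k in range(n):
        x = r * x * (1 - x)
        seq[k] = x
    return seq

def logistic_2d(x0, y0, r=1.19, n=1000):
    # One commonly cited coupled 2-D logistic map (assumed form):
    # x_{k+1} = r(3 y_k + 1) x_k (1 - x_k),  y_{k+1} = r(3 x_{k+1} + 1) y_k (1 - y_k)
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for k in range(n):
        x = r * (3 * y + 1) * x * (1 - x)
        y = r * (3 * x + 1) * y * (1 - y)
        xs[k], ys[k] = x, y
    return xs, ys

# Statement 1: two independent 1-D sequences used as keystreams.
y1, y2 = logistic_1d(0.31), logistic_1d(0.72)
# Statement 2: a single coupled 2-D map producing x and y sequences jointly.
xs, ys = logistic_2d(0.31, 0.72)
```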
Dear researchers.
I have recently started my research in detecting and tracking brain tumors with the help of artificial intelligence, which includes image processing.
What part of this research is valuable, and what do you suggest for the most recent part that is still useful for a PhD. research proposal?
Thank you for participating in this discussion.
Is there a website for researching special issue dates?
I am trying to make generalizations about which layers to freeze. I know that I must freeze feature extraction layers, but some feature extraction layers should not be frozen (for example, in the transformer architecture, the encoder and the multi-head attention part of the decoder, which are feature extraction layers, should not be frozen). Which layers should I call "feature extraction layers" in that sense? What kinds of "feature extraction" layers should I freeze?
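In practice this is a per-experiment choice exposed through requires_grad rather than a fixed rule. A hedged PyTorch sketch that freezes a torchvision backbone and leaves only a new classification head trainable (the model choice and class count are placeholders; torchvision >= 0.13 syntax):

```python
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                    # freeze everything first
model.fc = nn.Linear(model.fc.in_features, 10)     # new head, trainable by default

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)   # only the fc weights/bias remain trainable
```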
As a generative model, a GAN is usually used for generating fake samples, not for classification.
Dear All,
I have performed a Digital Image Correlation test on a rectangular piece of rubber to check the validity of my method. However, I get this chart most of the time. Can anyone show me why this is happening? I am using Ncorr and Post-Ncorr for image processing.
The monkeypox virus has recently been spreading very fast, which is alarming. Awareness can help people reduce the panic it is causing all over the world.
To that end, is there any image dataset for monkeypox?
As a student who wants to design a chip for processing CNN algorithms, I ask my question. If we want to design a NN accelerator architecture with RISC V for a custom ASIC or FPGA, what problems or algorithms do we aim to accelerate? It is clear to accelerate the MAC (Multiply - Accumulate) procedures with parallelism and other methods, but aiming for MLPs or CNNs makes a considerable difference in the architecture.
From what I have read and searched, CNNs are mostly used for image processing, so anything involving images is usually related to CNNs. Is it an acceptable idea to design an architecture to accelerate MLP networks? For MLP acceleration, which hardware blocks should I additionally work on? Or is it better to focus on CNNs, understand them, and work on them more?
I have a large DICOM dataset, around 200 GB. It is stored in Google Drive. I train the ML model from the lab's GPU server, but it does not have enough storage. I'm not authorized to attach an additional hard drive to the server. Since there is no way to access Google Drive without Colab (if I'm wrong, kindly let me know), where can I store this dataset so that I will be able to access it for training from the remote server?
Could you please tell me what the effect of electromagnetic waves on a human cell is? And how can the effect of electromagnetic waves on a human cell be modeled using image processing methods?
I am currently working on Image Processing of Complex fringes using MATLAB. I have to do the phase wrapping of images using 2D continuous wavelet transform.
I have a salt sample (5 grains) which undergoes hydration and dehydration for 8 cycles. I have pictures of the grains swelling and shrinking, taken every five minutes under a microscope. When I compile the images into a video I can see that the salt is swelling and shrinking, but I need to quantify how much the size increases or decreases. Can anyone explain how I can make use of the pictures?
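One common route is to threshold each frame and track the projected area of every grain over time. A scikit-image sketch, assuming reasonably uniform illumination and a known pixel size (the file path and pixel size are placeholders):

```python
from skimage import io, filters, measure

def grain_areas(path, pixel_size_um=1.0):
    """Threshold one microscope frame and return the area of each grain,
    so swelling/shrinking can be plotted over the 5-minute time series."""
    img = io.imread(path, as_gray=True)
    mask = img > filters.threshold_otsu(img)   # may need inverting, depending on contrast
    labels = measure.label(mask)
    return [p.area * pixel_size_um ** 2 for p in measure.regionprops(labels)]
```

Plotting the returned areas against frame time then gives a swelling/shrinking curve for each cycle.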
I am working on a classification task and used the 2D DWT as a feature extractor. I would like more detail on why I can concatenate 2D DWT coefficients to make a feature image. I am thinking of concatenating these coefficients (the horizontal, vertical, and diagonal coefficients) to make a feature image and then feeding it to a CNN, but I want convincing evidence for this approach.
Any short introductory document from the image domain, please.
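The usual justification is that the four sub-bands of a single-level 2-D DWT tile naturally into an image of the same size, so spatial relationships are preserved while the CNN sees edge-oriented detail channels. A PyWavelets sketch of that tiling (the wavelet choice is just an example):

```python
import numpy as np
import pywt

def dwt2_feature_image(img):
    """Single-level 2-D DWT with the sub-bands tiled back into one 'feature image',
    one common way of handing wavelet features to a CNN."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    top = np.hstack([cA, cH])
    bottom = np.hstack([cV, cD])
    return np.vstack([top, bottom])   # same layout as the usual DWT diagram

img = np.random.rand(128, 128)
features = dwt2_feature_image(img)    # -> (128, 128) when the input size is even
```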
Hello members,
I would appreciate it if someone could help me choose a topic in AI, deep learning, or machine learning.
I am looking for an algorithm that is used in different applications and has some issues in terms of accuracy and results, so that I can work on improving it.
Please recommend some papers that would help me find gaps so I can write my proposal.
I'm looking for the name of an SCI/SCIE journal with a quick review time and a high acceptance rate to publish my paper on image processing (Image Interpolation). Please make a recommendation.
Hi all,
I am looking for experts in the area of biomedical image processing.
Any recommendations ?
Please share
As you can see, the image was taken by tilting the camera to include the building in the scene. The problem with this is that measurements are not accurate in this perspective view.
How can I correct this image to the right (centered) perspective?
Thanks.
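If four image points that should form a fronto-parallel rectangle can be identified (e.g. the building's corners), a projective warp rectifies the view. An OpenCV sketch with placeholder point values, file name and output size:

```python
import cv2
import numpy as np

img = cv2.imread("building.jpg")                                   # hypothetical file
src = np.float32([[320, 120], [1540, 240], [1500, 980], [280, 1040]])  # corners in the photo
w, h = 1200, 900                                                   # pick to match the real aspect ratio
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])                 # target rectangle

H = cv2.getPerspectiveTransform(src, dst)                          # 3x3 homography
rectified = cv2.warpPerspective(img, H, (w, h))
cv2.imwrite("building_rectified.jpg", rectified)
```

Note that measurements on the rectified image are only correct for points lying on the rectified plane (the facade); objects in front of or behind it remain distorted.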
Red Blood Cells, White Blood Cells, Sickle Cells.
Suppose I use a Laplacian pyramid for an image denoising application; how would it be better than wavelets? I have read some documents on Laplacian tools in which Laplacian pyramids are said to offer a better choice for signal decomposition than wavelets.
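For reference, a Laplacian pyramid is a stack of band-pass differences between successive Gaussian levels plus a low-pass residual; denoising typically shrinks or thresholds the detail levels before recombining, much as with wavelet coefficients, but the bands are overcomplete and more shift-tolerant. A minimal OpenCV sketch of the decomposition:

```python
import cv2

def laplacian_pyramid(img, levels=4):
    """Each level is the difference between a Gaussian level and the
    upsampled next-coarser level; the last entry is the low-pass residual."""
    gauss = [img]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(cv2.subtract(gauss[i], up))
    lap.append(gauss[-1])
    return lap
```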
Dear Friends,
I would like to know the best method for a MATLAB-based parallel GPU implementation of my existing sequential MATLAB code. My code involves several custom functions and nested loops.
I tried converting it to a CUDA MEX function using MATLAB's GPU Coder, but I observed that it takes much more time (than the CPU) to run the same function.
Any suggestions will be appreciated.
Hello Researchers,
Can you tell me the problems or limitations of computer vision in this era that no one has yet paid heed to, or problems that researchers and industry are working on but have not yet solved?
Thanks in Advance!
Dear Colleagues,
If you are a researcher who is studying or has already published on the topic of Industry 4.0 or digital transformation, what is the hottest issue in this field for you?
Your answers will guide us in linking the perceptions of experts with bibliometric analysis results.
Thanks in advance for your contribution.
Lane detection is a common use case in computer vision, and self-driving cars rely heavily on seamless lane detection. I attempted a use case inspired by road lane detection, using computer vision to detect railway track lines, and I am encountering a problem. In road lane detection, the colour difference between the road (black) and the lane lines (yellow/white) makes edge detection, and thus lane detection, fairly easy. In railway track line detection, no such clear threshold for edge detection exists, and the output is as shown in the second image, making the detection of track lines unclear, with noise from track-slab detections and so on.
This question, therefore, seeks guidance/ advice/ Knowledge exchange to solve this problem.
Any feedback on the approach taken to attempt the problem is highly appreciated.
Tech: OpenCV
I'm about to start some analyses of vegetation indexes using Sentinel-2 imagery through Google Earth Engine. The analyses are going to comprise a series of images from 2015/2016 until now, and some of the data won't be available in Level-2A of processing (Bottom-of-Atmosphere reflectance).
I know there are some algorithms to estimate BOA reflectance. However, I don't know how good these estimates are, and the products generated by Sen2Cor look more reliable to me. I've already applied Sen2Cor through SNAP, but now I need to do it for a batch of images. So far, I couldn't find any useful information about how to do this in GEE (I'm using the Python API).
I'm a beginner, so any tips will be quite useful. Is it worth applying Sen2Cor, or do the other algorithms provide good estimates?
Thanks in advance!
I am publishing a paper in a Scopus journal and got the following comment:
Whether the mean m_z is the mean within the patches 8x8? If the organs are overlap then how adaptive based method with patches 8x8 is separated? No such image has been taken as a evidence of the argument. Please incorporate the results of such type of images to prove the effectiveness of the proposed method. One result is given which are well separated.
Here I am working on a method which takes patches of a given image and computes their mean. This mean is used for normalizing the data.
However, I am unable to understand the meaning of the second sentence. To my knowledge, an MRI image is kind of see-through, so how can there be any overlap of organs?
Any comments?
When preprocessing medical image data, different techniques should be considered, such as cropping, filtering, masking, and augmentation. My query is: which techniques are most frequently applied to medical image datasets during preprocessing?
Hello
I'm looking to generate synthetic diffusion images from T1-weighted images of the brain. I read that diffusion images are a sequence of T2 images but with gradients, so maybe it could be something related to this. I'm also not sure how to generate these gradients. I'm trying to generate "fake" diffusion images from T1w images because of the lack of data for the subjects I'm evaluating.
Can someone please help me?
Hello,
I have been working on computer vision and used datasets from Kaggle or other sites for my projects. But now I want to do lane departure warning and real-time lane detection under real-world conditions (illumination, road conditions, traffic, etc.). The idea of using simulators came to mind, but there are lots of simulators online and I'm confused about which one would be suitable for my work.
It would be very helpful if anyone could guide me in picking the best simulator for my work.
Is it because the imaging equation used by the color constancy model is built on RAW images? Or is it because the diagonal model can only be applied to RAW images? When we train a color constancy model using sRGB images, can we still use certain traditional color constancy models such as gamut mapping, correction moments, or CNN?
Could anyone suggest a software or code (R or Python) that is capable of recognizing bumblebees (recognizing only not identifying) from video recordings?
Dear sir/madam,
Greetings for the day,
With great privilege and pleasure, I request anyone belonging to the image processing domain to review my Ph.D. thesis. I hope you will be kind enough to review my research work. Please get back to me at my email address, [email protected], at your leisure.
Thank you in advance.
Hi. I'm working on 1000 images of 256x256 dimensions. For segmentation I'm using SegNet, U-Net, and DeepLabv3 layers. When I train my algorithms, it takes nearly 10 hours. I'm using a laptop with 8 GB RAM and a 256 GB SSD, and MATLAB for coding. Is there any possibility of speeding up training without a GPU?
I am researching handwriting analysis using image processing techniques.
Can anybody recommend a tool that could extract (segment) the pores (lineolae) from the following image of a diatom valve?
I mean an ImageJ or Fiji plugin, or any other software that can solve this task.
I'd like to measure the frost thickness on the fins of a heat exchanger based on GoPro frames.
I have the ImageJ software, but I don't know if there is a way to select a zone (a frosted fin) and deduce the average length in one direction.
Currently I take random measurements on a given fin and average them; however, the random points may not be representative.
I attached two pictures of the fins and frost to illustrate my question.
In advance, thank you very much,
AP
Currently, I'm working on a Deep Learning based project. It's a multiclass classification problem. The dataset can be found here: https://data.mendeley.com/datasets/s8x6jn5cvr/1
I have mostly used transfer learning, but I wasn't able to get higher accuracy on the test set. I have used cross-entropy and focal loss as loss functions. Here, I have 164 samples in the train set, 101 samples in the test set, and 41 samples in the validation set. Yes, about 33% of the samples are in the test partition (the data partition can't be changed, as instructed). I was able to get an accuracy score and F1 score of around 60%. How can I get higher performance on this dataset with this split ratio? Can anyone suggest some papers to follow, or any other guidance, for my deep-learning-based multiclass classification problem?
I am working on CTU (Coding Tree Unit) partitioning using a CNN for intra-mode HEVC. I need to prepare a database for that and have referred to multiple papers. In most papers, images are encoded to obtain binary labels (split or non-split) for all CU (Coding Unit) sizes, resolutions, and QPs (Quantization Parameters).
If anyone knows how to do this, please give the steps or reference material.
Reference papers
Hi,
In my research, I have created a new method of weak-edge enhancement. I want to try my method on an image dataset to compare it with the active-contour philosophy.
So, I am looking for images with masks, as shown in the paper below.
If you can help me get this data, it would be a great help.
Thanks and Regards,
Gunjan Naik
I'm looking for a PhD position and opportunity at an English-speaking university in a European country (or Australia).
I majored in artificial intelligence. I am in the field of medical image segmentation, and my master's thesis was about retinal blood vessel extraction based on active contours. I am skilled in image processing, machine learning, MATLAB, and C++.
Could anybody help me find a professor and a PhD position related to my skills at an English-speaking university?
Recently I have been collecting a red blood cell dataset for classification into the 9 categories of Ninad Mehendale's research paper. Can anyone suggest the dataset for the paper "Red Blood Cell Classification Using Image Processing and CNN"?
There are shape descriptors: circularity, convexity, compactness, eccentricity, roundness, aspect ratio, solidity, elongation.
1) What are the real formulas for determining these descriptors?
2) circularity = roundness? solidity = ellipticity?
I compared lecture notes (M.A. Wirth*) with the ImageJ (Fiji) user guide and am completely confused: the descriptors are almost completely different! Which source should I trust?
*Wirth, M.A. Shape Analysis and Measurement. / M.A. Wirth // Lecture 10, Image Processing Group, Computing and Information Science, University of Guelph. – Guelph, ON, Canada, 2001 – S. 29
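For what it's worth, the definitions below are the ones I have seen used by ImageJ/Fiji; other texts, including some lecture notes, attach the same names to different ratios, which is likely the source of the confusion. Treat these as one convention rather than "the" formulas:

```python
import math

# ImageJ/Fiji-style shape descriptors (one common convention, not the only one).
def circularity(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2     # 1.0 = perfect circle

def roundness(area, major_axis):
    return 4 * area / (math.pi * major_axis ** 2)  # uses the fitted ellipse's major axis

def aspect_ratio(major_axis, minor_axis):
    return major_axis / minor_axis

def solidity(area, convex_area):
    return area / convex_area                      # area / convex-hull area
```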
Dear Researchers,
In a remote sensing application to volcanic activity where the objective is to determine temperature, which portion (more specifically, which range) of the EM spectrum can detect the electromagnetic emissions of hot volcanic surfaces (which are a function of the temperature and emissivity of the surface, where the temperature can reach as high as 1000°C)? Why?
Sincerely,
Aman Srivastava
I have grayscale images obtained from SHG microscopy of human cornea collagen bundles, both as TIFF stack images and in their original CZI format. I want to convert these 2D images into a 3D volume, but I could not find any method to do this using MATLAB, Python, or any other program.
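For the TIFF stack, loading the pages into one (z, y, x) array is usually enough to treat it as a volume, and an isosurface can then be extracted for rendering. A Python sketch with tifffile and scikit-image (the file name and iso-level are placeholders; the z spacing usually differs from the pixel spacing and should be accounted for when rendering):

```python
import numpy as np
import tifffile
from skimage import measure

volume = tifffile.imread("cornea_stack.tif")   # multi-page TIFF -> (z, y, x) array
print(volume.shape, volume.dtype)

# Extract an isosurface (vertices + triangles) for 3D visualization.
verts, faces, normals, values = measure.marching_cubes(
    volume.astype(np.float32), level=float(volume.mean()))
```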
Hello dear researchers.
It seems that the SiamRPN algorithm is one of the very good algorithms for object tracking; its processing speed on a GPU is 150 fps. But the problem is that if your chosen object is, for example, a white phone, and you are dressed in white and move the phone towards you, the whole bounding box will mistakenly end up on your clothes. So it has low sensitivity to color. How do you think I can optimize the algorithm to solve this problem? Of course, there are algorithms with high accuracy, such as SiamMask, but they have a very low fps. Thank you for your help.
Hi
I'm trying to acquire raw data from a Philips MRI scanner.
I followed the save-raw-data procedure and then obtained a .idx and a .log file.
I'm not sure if I implemented the procedure correctly.
Are .idx and .log files the file format of Philips MRI raw data?
If so, how do I open these files? Is it possible to open them in MATLAB?
Thanks
Judith
How can I determine the distance and proximity, as well as the depth, in image processing for object tracking? One idea that came to my mind was to detect whether the object is moving away or approaching based on its size in the image, but I do not know if there is an algorithm I can base this on.
In fact, how can I obtain the x, y, z coordinates from the image taken with the webcam?
Thank you for your help
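With a single webcam you cannot recover metric depth without an extra assumption; the size-based idea works if the object's real size is known (or assumed constant). Under the pinhole model the distance follows directly from the focal length in pixels and the apparent size, while x and y come from the pixel offset from the principal point. A tiny sketch with assumed numbers:

```python
def estimate_distance(focal_px, real_width_m, width_px):
    # Pinhole model: Z = f * W / w, with f in pixels, W in metres, w in pixels.
    return focal_px * real_width_m / width_px

# Example with assumed values: 800-px focal length, a 7 cm wide phone
# spanning 120 px in the frame.
print(estimate_distance(800, 0.07, 120))   # ~0.47 m
```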
Hi,
What are the main image processing journals that publish work on the collection, creation and classification of medical imaging databases, such as the Medical Image Analysis journal?
Thank you for your support,
I am using transfer learning with pre-trained models in PyTorch for an image classification task.
When I modified the output layer of the pre-trained model (e.g., AlexNet) for our dataset and ran the code to view the modified architecture of AlexNet, the output was "None".
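A frequent cause of seeing "None" is printing the return value of a call or function that returns nothing, instead of printing the model object itself. A hedged sketch of replacing AlexNet's final layer in torchvision and inspecting the result (the class count is a placeholder; torchvision >= 0.13 syntax):

```python
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights="IMAGENET1K_V1")
num_classes = 5                                   # assumed number of classes in your dataset
# Replace only the last Linear layer of the classifier head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
print(model)   # prints the full modified architecture, not None
```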
Hi everyone. I'm currently converting video into images and noticed that 85% of the images don't contain the object. Is there any algorithm to check whether an image contains an object or not using an objectness score?
Thanks in advance :)