Science topic

Image Processing - Science topic

All kinds of image processing approaches.
Questions related to Image Processing
  • asked a question related to Image Processing
Question
2 answers
2024 4th International Conference on Computer, Remote Sensing and Aerospace (CRSA 2024) will be held in Osaka, Japan on July 5-7, 2024.
Conference Website: https://ais.cn/u/MJVjiu
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Algorithms
Image Processing
Data processing
Data Mining
Computer Vision
Computer Aided Design
......
2. Remote Sensing
Optical Remote Sensing
Microwave Remote Sensing
Remote Sensing Information Engineering
Geographic Information System
Global Navigation Satellite System
......
3. Aeroacoustics
Aeroelasticity and structural dynamics
Aerothermodynamics
Airworthiness
Autonomy
Mechanisms
......
All accepted papers will be published in the Conference Proceedings and submitted to EI Compendex and Scopus for indexing.
Important Dates:
Full Paper Submission Date: May 31, 2024
Registration Deadline: May 31, 2024
Conference Date: July 5-7, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system grants priority review and feedback.
Relevant answer
Answer
Dear Kazi Redwan, the Regular Registration (4-6 pages) fee is 485 USD. Online presentation is accepted. All accepted papers will be published in the Conference Proceedings and submitted to EI Compendex and Scopus for indexing.
For more details about registration, please visit http://www.iccrsa.org/registration_all
For Paper submission: https://ais.cn/u/MJVjiu
  • asked a question related to Image Processing
Question
2 answers
Call for Papers
The CMC-Computers, Materials & Continua special issue “Emerging Trends and Applications of Deep Learning for Biomedical Signal and Image Processing” is now open for submission.
Submission Deadline: 31 March 2025
Guest Editors
  • Prof. Batyrkhan Omarov, Al-Farabi Kazakh National University, Kazakhstan
  • Prof. Aigerim Altayeva, International Information Technology University, Kazakhstan
  • Prof. Bakhytzhan Omarov, International University of Tourism and Hospitality, Kazakhstan
Summary
In this special issue, we delve into the cutting-edge advancements and transformative applications of deep learning techniques within the realms of biomedical engineering and healthcare. Deep learning, a subset of artificial intelligence, has emerged as a groundbreaking tool, offering unparalleled capabilities in interpreting complex biomedical signals and images. This issue brings together a collection of research articles, reviews, and case studies that highlight the innovative integration of deep learning methodologies for analyzing physiological signals (such as EEG, ECG, and EMG) and medical images (including MRI, CT scans, and X-rays).
The content spans a broad spectrum, from theoretical frameworks and algorithm development to practical applications and case studies, providing insights into the current state-of-the-art and future directions in this rapidly evolving field. Key themes include, but are not limited to, the development of novel deep learning models for disease diagnosis and prognosis, enhancement of image quality and interpretation, real-time monitoring and analysis of biomedical signals, and personalized healthcare solutions.
Contributors to this issue showcase the significant impact of deep learning on improving diagnostic accuracy, enabling early detection of abnormalities, and facilitating personalized treatment plans. Furthermore, discussions extend to ethical considerations, data privacy, and the challenges of implementing AI technologies in clinical settings, offering a comprehensive overview of the landscape of deep learning applications in biomedical signal and image processing.
Through a blend of technical depth and accessibility, this special issue aims to inform and inspire researchers, clinicians, and industry professionals about the potential of deep learning to revolutionize healthcare, paving the way for more innovative, efficient, and personalized medical care.
For submission guidelines and details, visit: https://www.techscience.com/.../special.../biomedical-signal
Relevant answer
Answer
Dear Paulo Bolinhas, your enthusiasm for the topic is contagious, and we couldn't agree more that this special issue is a treasure trove of ideas and discoveries. We hope it will inspire further research and innovation in the field.
  • asked a question related to Image Processing
Question
7 answers
I am delighted to announce that, with endless effort and cooperation with my brother Prof. Mostafa Elhosseini, we succeeded in wrapping up our special issue, entitled "Deep and Machine Learning for Image Processing: Medical and Non-medical Applications," with this nice Editorial paper that highlights the research innovations of the valued contributors and opens the field to more future endeavors. It is worth mentioning that this special issue attracted more than 35 contributions, of which only 12 were published in the end. Please enjoy reading it, and a shout-out to my professional co-Editor, Prof. Mostafa Elhosseini, all the contributors, and the Electronics Editorial Office.
The link for the paper can be found here:
Relevant answer
Answer
Dear Mohamed Shehata, you call yourselves successful people, but you are unable to answer my simple questions regarding your roles as editors?!
Remember, now and forever, Mohamed: real scientists don't leave or block, but stay and answer their critics scientifically and logically. Also remember, Mohamed, that real research is product- and/or service-oriented, not just manuscripts published by force of money that change nothing in our real world!!!
  • asked a question related to Image Processing
Question
2 answers
2024 4th International Conference on Image Processing and Intelligent Control (IPIC 2024) will be held from May 10 to 12, 2024 in Kuala Lumpur, Malaysia.
Conference Website: https://ais.cn/u/ZBn2Yr
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Image Processing
- Image Enhancement and Recovery
- Target detection and tracking
- Image segmentation and labeling
- Feature extraction and image recognition
- Image compression and coding
......
◕ Intelligent Control
- Sensors in Intelligent Photovoltaic Systems
- Sensors and Laser Control Technology
- Optical Imaging and Image Processing in Intelligent Control
- Fiber optic sensing technology in the application of intelligent photoelectric system
......
All accepted papers will be published in conference proceedings, and submitted to EI Compendex, Inspec and Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 19, 2024
Registration Deadline: May 3, 2024
Final Paper Submission Date: May 3, 2024
Conference Dates: May 10-12, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system grants priority review and feedback.
Relevant answer
Answer
Thank you
  • asked a question related to Image Processing
Question
1 answer
2024 IEEE 7th International Conference on Computer Information Science and Application Technology (CISAT 2024) will be held on July 12-14, 2024 in Hangzhou, China.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Computational Science and Algorithms
· Algorithms
· Automated Software Engineering
· Bioinformatics and Scientific Computing
......
◕ Intelligent Computing and Artificial Intelligence
· Basic Theory and Application of Artificial Intelligence
· Big Data Analysis and Processing
· Biometric Identification
......
◕ Software Process and Data Mining
· Software Engineering Practice
· Web Engineering
· Multimedia and Visual Software Engineering
......
◕ Intelligent Transportation
· Intelligent Transportation Systems
· Vehicular Networks
· Edge Computing
· Spatiotemporal Data
All accepted papers, both invited and contributed, will be published and submitted for inclusion in IEEE Xplore, subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing. Conference proceedings papers must be at least 4 pages.
Important Dates:
Full Paper Submission Date: April 14, 2024
Submission Date: May 12, 2024
Registration Deadline: June 14, 2024
Conference Dates: July 12-14, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system grants priority review and feedback.
Relevant answer
Answer
Please let me know if anyone is interested.
  • asked a question related to Image Processing
Question
1 answer
The 2024 3rd International Conference on Automation, Electronic Science and Technology (AEST 2024) will be held in Kunming, China on June 7-9, 2024.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
(1) Electronic Science and Technology
· Signal Processing
· Image Processing
· Semiconductor Technology
· Integrated Circuits
· Physical Electronics
· Electronic Circuit
......
(2) Automation
· Linear System Control
· Control Integrated Circuits and Applications
· Parallel Control and Management of Complex Systems
· Automatic Control System
· Automation and Monitoring System
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 1, 2024
Registration Deadline: May 24, 2024
Final Paper Submission Date: May 31, 2024
Conference Dates: June 7-9, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system grants priority review and feedback.
Relevant answer
Answer
Useful thing
  • asked a question related to Image Processing
Question
2 answers
The 3rd International Conference on Optoelectronic Information and Functional Materials (OIFM 2024) will be held in Wuhan, China from April 5 to 7, 2024.
The annual Optoelectronic Information and Functional Materials conference (OIFM) offers delegates and members a forum to present and discuss the most recent research. Delegates and members will have numerous opportunities to join in discussions on these topics. Additionally, it offers fresh perspectives and brings together academics, researchers, engineers, and students from universities and businesses throughout the globe under one roof.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Optoelectronic information science
- Optoelectronics
- Optical communication and optical network
- Optical fiber communication and system
......
2. Information and Communication Engineering
- Communication and information system
- Wireless communication, data transmission
- Switching and broadband network
......
3. Materials science and Engineering
- New materials
- Optoelectronic functional materials and devices
- Bonding material
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: February 5, 2024
Registration Deadline: March 22, 2024
Final Paper Submission Date: March 29, 2024
Conference Dates: April 5-7, 2024
For More Details please visit:
Relevant answer
Answer
Dear Sergey Alexandrovich Shoydin, only English manuscripts are accepted at the conference.
  • asked a question related to Image Processing
Question
3 answers
I am trying to train a CNN model in Matlab to predict the mean value of a random vector (the Matlab code named Test_2 is attached). To clarify further: I generate a random vector with 10 components (using the rand function) 500 times. The figure of each vector versus 1:10 is plotted and saved separately, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validating (15%) and testing (15%) a CNN model that is supposed to predict the mean value of the mentioned random vectors (Y). However, the RMSE of the model is far too high; in other words, the model does not learn despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
Relevant answer
Answer
Dear Renjith Vijayakumar Selvarani and Dear Qamar Ul Islam,
Many thanks for your notice.
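For anyone attempting something similar, below is a minimal regression-CNN sketch, written in Python/PyTorch rather than the poster's MATLAB purely for illustration; the layer sizes and the synthetic data are assumptions, not the attached Test_2 code. The target here is each image's own mean intensity as a sanity check: if even this cannot be fit, the likely culprit is the data pipeline (image scaling, label alignment) rather than the training options.

# Minimal sketch: regress a scalar from an image with an MSE loss.
# Layer sizes and data are illustrative; this is not the poster's Test_2 code.
import torch
import torch.nn as nn

class MeanRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        return self.head(self.features(x))

# 500 synthetic grayscale images; target = each image's mean intensity.
images = torch.rand(500, 1, 64, 64)
targets = images.mean(dim=(2, 3))            # shape (500, 1)

model, loss_fn = MeanRegressor(), nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
print(loss.item())  # should approach ~0 if the pipeline is sound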
  • asked a question related to Image Processing
Question
5 answers
I am working on lane line detection using lidar point clouds, using a sliding window to detect lane lines. As lane lines have higher intensity values than asphalt, we can use the intensity values to separate lane-line points from low-intensity non-lane-line points. However, my lane detection suffers from noisy points, i.e. high-intensity non-lane-line points. I've tried intensity thresholding and statistical outlier removal based on intensity, but they don't seem to work, as I am dealing with some pretty noisy point clouds. Please suggest some non-AI-based methods which I can use to get rid of the noisy points.
Relevant answer
Answer
Can you show some example images?
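In the meantime, one classical (non-AI) direction: lane markings form long, thin, well-populated clusters of bright points, while intensity noise tends to be scattered. A hedged Python sketch using density clustering plus a PCA elongation test; eps, min_samples, the intensity threshold and the elongation ratio are all assumptions to tune per dataset.

# Sketch: keep only elongated, dense clusters of high-intensity points.
import numpy as np
from sklearn.cluster import DBSCAN

def filter_lane_points(points, intensity_thresh=0.7, eps=0.5, min_samples=10):
    # points: (N, 3) array of [x, y, intensity]
    bright = points[points[:, 2] > intensity_thresh]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(bright[:, :2])
    keep = np.zeros(len(bright), dtype=bool)
    for lab in set(labels) - {-1}:              # -1 marks DBSCAN noise
        cluster = bright[labels == lab, :2]
        # Elongation via PCA: lane segments have one dominant direction.
        eigvals = np.linalg.eigvalsh(np.cov(cluster.T))
        if eigvals[-1] / (eigvals[0] + 1e-9) > 20.0:   # assumed ratio
            keep[labels == lab] = True
    return bright[keep]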
  • asked a question related to Image Processing
Question
4 answers
In the rapidly evolving landscape of the Internet of Things (IoT), the integration of blockchain, machine learning, and natural language processing (NLP) holds promise for strengthening cybersecurity measures. This question explores the potential synergies among these technologies in detecting anomalies, ensuring data integrity, and fortifying the security of interconnected devices.
Relevant answer
Answer
Imagine we're talking about a superhero team-up in the world of tech, with blockchain, machine learning (ML), and natural language processing (NLP) joining forces to beef up cybersecurity in IoT environments.
First up, blockchain. It's like the trusty sidekick ensuring data integrity. By nature, it's transparent and tamper-proof. So, when you have a bunch of IoT devices communicating, blockchain can help keep that data exchange secure and verifiable. It's like having a digital ledger that says, "Yep, this data is legit and hasn't been messed with."
Then, enter machine learning. ML is the brains of the operation, constantly learning and adapting. It can analyze data patterns from IoT devices to spot anything unusual. Think of it as a detective that's always on the lookout for anomalies or suspicious activities.
And finally, there's NLP. It's a bit like the communicator of the group. In this context, NLP can be used to sift through tons of textual data from IoT devices or networks, helping to identify potential security threats or unusual patterns that might not be obvious at first glance.
Put them all together, and you've got a powerful team. Blockchain keeps the data trustworthy, ML hunts down anomalies, and NLP digs deeper into the data narrative. This combo can seriously level up cybersecurity in IoT, making it harder for bad actors to sneak in and cause havoc. Cool, right?
  • asked a question related to Image Processing
Question
1 answer
Seeking insights on optimizing CNNs to meet low-latency demands in real-time image processing scenarios. Interested in efficient model architectures or algorithmic enhancements.
Relevant answer
Answer
Here are several optimization strategies for Convolutional Neural Networks (CNNs) to achieve real-time image processing with stringent latency requirements:
1. Model Architecture Optimization:
  • Reduce Model Size: Employ depthwise separable convolutions to reduce parameters and computations. Utilize smaller filters (e.g., 3x3 instead of 5x5). Reduce the number of filters in convolutional layers. Consider efficient model architectures like MobileNet, ShuffleNet, or EfficientNet.
  • Employ Depthwise Separable Convolutions: These split a standard convolution into two separate operations, significantly reducing computations and parameters.
  • Channel Pruning: Identify and remove less-important channels from convolutional layers to reduce model size without compromising accuracy.
2. Quantization:
  • Reduce Precision: Quantize weights and activations from 32-bit floating point to lower-precision formats (e.g., 8-bit integers) for faster computation and a smaller model.
3. Hardware Acceleration:
  • Utilize Specialized Hardware: Deploy CNNs on GPUs, TPUs, or specialized AI accelerators (e.g., Intel Movidius, NVIDIA Jetson) optimized for deep learning computations.
4. Software Optimization:
  • Efficient Libraries: Leverage highly optimized deep learning libraries like TensorFlow Lite, PyTorch Mobile, or OpenVINO for efficient model deployment on resource-constrained devices.
  • Kernel Fusion: Combine multiple computations into a single kernel for reduced memory access and improved performance.
5. Input Optimization:
  • Reduce Image Resolution: Process lower-resolution images to reduce computational load while ensuring acceptable accuracy.
6. Model Pruning:
  • Remove Unnecessary Parameters: Identify and eliminate redundant or less-significant parameters from the trained model to reduce its size and computational complexity.
7. Knowledge Distillation:
  • Transfer Knowledge: Train a smaller, faster model to mimic the behavior of a larger, more accurate model, benefiting from its knowledge while achieving real-time performance.
8. Early Exiting:
  • Terminate Early: Allow for early decision-making in the model, especially for applications with varying levels of confidence requirements. This can reduce computations for easier-to-classify inputs.
By carefully combining these techniques, developers can create CNN-based real-time image processing systems that meet stringent latency requirements while maintaining high accuracy.
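To make item 1 concrete, here is a minimal PyTorch sketch of the depthwise separable convolution block used by MobileNet-style architectures; the channel counts are illustrative. A KxK depthwise convolution per channel followed by a 1x1 pointwise convolution replaces a standard KxK convolution, cutting parameters and FLOPs roughly by a factor of K^2 for typical channel widths.

# Sketch: depthwise separable convolution (MobileNet-style building block).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # groups=in_ch makes the first conv operate per channel (depthwise).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 128, 128)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 128, 128])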
  • asked a question related to Image Processing
Question
3 answers
This question blends various emerging technologies to spark discussion. It asks if sophisticated image recognition AI, trained on leaked bioinformatics data (e.g., genetic profiles), could identify vulnerabilities in medical devices connected to the Internet of Things (IoT). These vulnerabilities could then be exploited through "quantum-resistant backdoors" – hidden flaws that remain secure even against potential future advances in quantum computing. This scenario raises concerns for cybersecurity, ethical hacking practices, and the responsible development of both AI and medical technology.
Relevant answer
Answer
Combining image-trained neural networks, bioinformatics breaches, and quantum-resistant backdoors has major limitations.
Moving from image-trained neural networks to bioinformatics data requires significant domain transfer, which is not straightforward due to the distinct nature of these data types and tasks.
Secure IoT medical devices are designed and deployed with robust security features in mind. A successful attack requires exploiting a specific vulnerability in the implementation of those security measures, rather than relying on neural network capabilities.
Deliberately inserting backdoors, even quantum-resistant ones, poses ethical and legal questions that go against the norms and standards of cybersecurity practitioners. Such actions would violate privacy rights at the federal level and ethical standards and codes of conduct, and would carry severe legal consequences; and those are only the domestic ones, assuming we're keeping the products in the US.
Quantum computers powerful enough to break current cryptographic systems are not yet available. Knowingly developing quantum-resistant backdoors anticipates a future scenario that today remains largely theoretical.
  • asked a question related to Image Processing
Question
4 answers
2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2024) will be held on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: February 1, 2024
Registration Deadline: March 1, 2024
Final Paper Submission Date: March 15, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Vision and Image technologies
- DL Technologies
- DL Applications
All accepted papers will be published by IEEE and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
Relevant answer
Answer
Great opportunity!
  • asked a question related to Image Processing
Question
4 answers
Today AI is emerging more rapidly than ever. When it comes to production, we have many arguments about how and where to add artificial intelligence; even monitoring production performance is still done manually. Image processing AI that gathers data from live video needs a lot of processing power and high-quality infrastructure. What do you think about monitoring production performance: can we use image processing technology, and how can we make it more precise?
Relevant answer
Answer
At its core, AI image processing is the marriage of two cutting-edge fields: artificial intelligence (AI) and computer vision. It's the art and science of bestowing computers with the remarkable ability to understand, interpret, and manipulate visual data—much like the human visual system. Imagine an intricate dance between algorithms and pixels, where machines "see" images and glean insights that elude the human eye.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
2 answers
It would be very helpful for me to get answers to my questions above.
Relevant answer
Answer
MDPI is a publisher that allows you to publish in a very short time. Specifically, I suggest considering the journals 'Mathematics,' 'Applied Sciences,' and 'Journal of Imaging.' These publications are notable for their quick turnaround times. Personally, I have had the opportunity to publish in all three and have also served as a reviewer for articles in the first two journals, all of which have been positive experiences.
If you are interested in publishing in 'Mathematics' and 'Applied Sciences,' I would recommend exploring special issues related to imaging.
  • asked a question related to Image Processing
Question
1 answer
I am trying to get insight into the above-mentioned research paper, especially the filtering process used to remove grid artifacts. However, I find it difficult to understand correctly.
I would be much grateful if anyone could help me to clarify a few questions that I have.
My questions are as follows:
1) What are the pixel values of the mean filter they used? They mention using an improved mean filter, but what is the improvement?
2) Do they apply the mean filter to the whole patch image (it seems so), or only in the grid signal region (characteristic peak range)?
3) What do they mean by (u1,v1) being the Fmax value? Does that mean the center pixel of the mean filter is replaced by this max value?
Thanks in advance!
Relevant answer
Answer
This is a Fourier-domain filtering technique. Note that the area zeroed out before the IFFT is the frequency component of that noise pattern. This technique is used in medical imaging for mammography, CT, and MRI.
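For illustration, a minimal NumPy sketch of such a notch filter, assuming the grid peak has already been located at (u1, v1) in the shifted spectrum; the notch radius is an illustrative parameter.

# Sketch: zero a known frequency peak (and its symmetric twin) in the 2D FFT.
import numpy as np

def notch_filter(image, u1, v1, radius=3):
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    cy, cx = rows // 2, cols // 2
    Y, X = np.ogrid[:rows, :cols]
    # Zero the peak and its conjugate-symmetric counterpart.
    for (u, v) in [(u1, v1), (2 * cy - u1, 2 * cx - v1)]:
        F[(Y - u) ** 2 + (X - v) ** 2 <= radius ** 2] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))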
  • asked a question related to Image Processing
Question
2 answers
Hello,
I have the ImageJ image processing software and would like to calculate and plot the curvature of a beam using it. I searched online and it suggests downloading the PhotoBend plugin. Can someone suggest a solution for determining the curvature of a beam using image processing software?
Relevant answer
Answer
Hi Onkar Kulkarni. If you want to try Fiji (ImageJ), curvature can be measured semi-automatically with Kappa. Check out the video tutorial here if you are interested: https://youtu.be/x0eGR7DwjfM?si=LI_-2d2RgH7XAe9W
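If you prefer scripting it yourself, curvature can also be computed outside ImageJ: extract (x, y) points along the beam (e.g. from a thresholded skeleton), fit a circle by least squares, and take curvature = 1/R. A hedged NumPy sketch of the algebraic (Kasa) circle fit; the point-extraction step is assumed to be done already.

# Sketch: least-squares circle fit; curvature = 1 / radius.
import numpy as np

def circle_curvature(x, y):
    # Circle (x-a)^2 + (y-b)^2 = R^2 rewritten as 2ax + 2by + c = x^2 + y^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    return 1.0 / radius

# Self-test on a synthetic arc of radius 50 (true curvature 0.02):
t = np.linspace(0, np.pi / 4, 100)
print(circle_curvature(50 * np.cos(t), 50 * np.sin(t)))  # ~0.02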
  • asked a question related to Image Processing
Question
7 answers
Recently I have started using ChimeraX, version 1.1. Unfortunately, I could not find any options to export images at a resolution greater than 96 dpi. I have tried the steps written here (http://plato.cgl.ucsf.edu/pipermail/chimerax-users/2020-September/001508.html), but this did not work for me.
Is there any way to solve this issue? It will be a great help to me.
Relevant answer
Answer
Use the command line:
save filename [ format format-name ] [ [ width w ] [ height h ] | pixelSize p ] [ supersample N ] [ quality M ] [ transparentBackground true | false ]
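For example, a high-resolution export might look like this (values illustrative, not from the thread): save image.png width 3000 height 2000 supersample 3. Supersampling renders the image at a multiple of the requested size and then downsamples it, which smooths edges for publication figures.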
  • asked a question related to Image Processing
Question
2 answers
Hello. I have a series of SEM images containing some nanorods and nanoparticles. I want to measure what percentage are nanorods and what percentage are nanoparticles, and separate them by color. Does anyone know how to do this with MATLAB software?
Relevant answer
Answer
The answer provided by Qamar Ul Islam is obviously AI generated, but the proposed code would somehow work. If you need real help with the software I have experience doing similar things in Matlab, so you may contact me. :)
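For readers who want a concrete starting point, here is a hedged sketch in Python/scikit-image (rather than MATLAB, purely for illustration) of the usual approach: segment, label, then split rods from particles by a shape measure such as the major/minor axis ratio. The filename, the Otsu segmentation and the ratio cutoff are assumptions to adapt.

# Sketch: classify labeled blobs as nanorods (elongated) or nanoparticles.
import numpy as np
from skimage import io, filters, measure, color

img = io.imread("sem_image.png", as_gray=True)       # hypothetical filename
mask = img > filters.threshold_otsu(img)
labels = measure.label(mask)

overlay = color.gray2rgb(img)
n_rods = n_particles = 0
for region in measure.regionprops(labels):
    # Older scikit-image calls these major_axis_length / minor_axis_length.
    ratio = region.axis_major_length / max(region.axis_minor_length, 1e-6)
    is_rod = ratio > 3.0                              # assumed cutoff
    n_rods += is_rod
    n_particles += not is_rod
    rc = region.coords
    overlay[rc[:, 0], rc[:, 1]] = (1, 0, 0) if is_rod else (0, 0, 1)

total = max(n_rods + n_particles, 1)
print(f"rods: {100 * n_rods / total:.1f}%, particles: {100 * n_particles / total:.1f}%")
io.imsave("classified.png", (overlay * 255).astype(np.uint8))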
  • asked a question related to Image Processing
Question
1 answer
What is the best way to convert events (x, y, polarity, timestamp) obtained from an event camera via ROS into frames for a real-time application?
Or is there a way to deal with these events directly, without conversion?
Relevant answer
Answer
Murana Awad Dealing with events from an event camera in real-time applications often requires specialized processing due to their asynchronous and continuous nature. Event cameras provide data in the form of pixel-level events, such as changes in brightness (polarity), and timestamps when these changes occur. To utilize this data effectively, you have several options:
  1. Frame Reconstruction: One common approach is to convert events into frames or images, making them compatible with traditional computer vision techniques. You can accumulate events over short time intervals (e.g., milliseconds) to reconstruct frames. Event data can be aggregated into intensity images (e.g., by counting events) or used to create event-driven frames. The choice depends on your specific application. You can use libraries like DVS128, jAER, or custom scripts for this.
  2. Direct Processing: Some real-time applications, especially those focused on object tracking or optical flow, can benefit from processing events directly without frame reconstruction. Various algorithms are available for direct event processing. The event data is often processed using techniques like event-driven optical flow or event-based algorithms for object tracking. Libraries like EVT-Stream can be used for direct processing.
  3. Sensor Fusion: In certain cases, event data can be fused with data from other sensors, such as traditional cameras or LIDAR, to enhance perception and enable more comprehensive real-time applications. Sensor fusion algorithms can combine the strengths of different sensor modalities.
  4. Deep Learning: Deep learning approaches, especially convolutional neural networks (CNNs), can be trained on event data directly, bypassing frame reconstruction. Event-based CNNs have shown promise in tasks like object recognition and tracking. Training neural networks on event data requires specialized datasets and architectures.
  5. ROS Integration: If you are working with Robot Operating System (ROS), you can utilize ROS packages and libraries specifically designed for event cameras. These packages simplify data acquisition and integration with other ROS components.
The choice of the best approach depends on your specific real-time application and its requirements. Consider factors such as the desired output, computational resources available, and the nature of the tasks you need to perform. It's often beneficial to start with existing libraries and frameworks tailored to event cameras, as they can save you significant development time. Additionally, experimenting with different approaches and assessing their performance is essential for optimizing your real-time event-based system.
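To make option 1 concrete, a minimal NumPy sketch of event-to-frame accumulation; the frame size and the 10 ms window are assumptions, and the fields follow the (x, y, polarity, timestamp) tuple from the question.

# Sketch: accumulate events into signed frames (+1 ON, -1 OFF per event).
import numpy as np

def events_to_frames(x, y, polarity, t, height, width, window=10e-3):
    frames = []
    t = t - t[0]
    n_frames = int(np.ceil(t[-1] / window))
    for i in range(n_frames):
        sel = (t >= i * window) & (t < (i + 1) * window)
        frame = np.zeros((height, width), dtype=np.int32)
        # np.add.at handles repeated pixel indices correctly.
        np.add.at(frame, (y[sel], x[sel]), np.where(polarity[sel], 1, -1))
        frames.append(frame)
    return frames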
  • asked a question related to Image Processing
Question
17 answers
Hi everyone
I'm facing a real problem when trying to export data results from imageJ (fiji) to excel to process it later.
The problem is that I have to change the dots (.) and commas (,) manually, even after changing the properties in Excel (from , to .), so that the numbers are not counted as thousands: say I have 1,302 (one point three zero two); Excel counts it as 1302 (one thousand three hundred and two) when I transfer the data.
Lately I found a nice plugin (Localized copy...) that can change the numbers format locally in imageJ so it can be used easily by excel.
Unfortunately, this plugin has some bugs: it can only copy one line of the huge dataset that I have, and only once (so I have to close and reopen the image again).
Has anyone else faced this problem? Can anyone please suggest another solution?
Thanks in advance
Problem finally solved... I got the new version of 'Localized copy' plugin from the owner Mr Wolfgang Gross (not sure if I have the permission to upload it here).
Relevant answer
Answer
Jonas Petersen cool! some answers after years XD
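For anyone who finds this thread with the same problem: a plugin-free workaround is to save the Results table to a file (File > Save As) and parse it in Python, since pandas reads comma decimals directly. A hedged sketch; the filename and the tab separator are assumptions to match how ImageJ exported the table.

# Sketch: parse an ImageJ results table that uses comma decimals.
import pandas as pd

df = pd.read_csv("Results.csv", sep="\t", decimal=",")
df.to_excel("Results.xlsx", index=False)   # requires openpyxl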
  • asked a question related to Image Processing
Question
5 answers
I have worked on image processing for image fusion and image watermarking.
At present I want to work on big data analysis and apply it to medical image processing.
Relevant answer
Answer
Dear Professor Mahendra Kumar,
Could you please have a look at my following article:
Best regards,
Nidhal
  • asked a question related to Image Processing
Question
5 answers
I have a brain MRI dataset which contains four image modalities: T1, T2, FLAIR and T1 contrast-enhanced. From this dataset, I want to segment the non-enhancing tumor core, peritumoral edema and GD-enhancing tumor. I'm confused about which modality I should use for each of the mentioned regions.
I will be thankful for any kind of help to clear up my confusion.
Relevant answer
Answer
The choice of MRI modality for the detection of brain tumors depends on the specific characteristics of the tumor and the clinical question being addressed. MRI is a versatile imaging modality that offers different sequences, each providing unique information about the brain and its pathologies. In the context of brain tumor detection, the following MRI sequences are commonly used:
  1. T1-weighted (T1W) Imaging: T1-weighted images provide good anatomical detail and are useful for visualizing brain structures. They can help identify the location and size of tumors based on their contrast with surrounding tissues. However, T1W images may not always be sufficient for characterizing tumor tissue types.
  2. T2-weighted (T2W) Imaging: T2-weighted images are sensitive to water content and are particularly useful for detecting edema and peritumoral changes. T2W images can help identify tumor margins and assess the tumor's relationship with the surrounding brain tissue.
  3. Fluid-Attenuated Inversion Recovery (FLAIR) Imaging: FLAIR imaging is a T2-weighted sequence that suppresses the signal from cerebrospinal fluid (CSF). This sequence is highly sensitive to edema and is often used to highlight peritumoral edema, making it valuable in tumor detection and characterization.
  4. T1-weighted with Gadolinium Contrast Enhancement (T1-Gd): Gadolinium-based contrast agents enhance the signal in regions with increased vascular permeability, such as tumor tissue. T1-Gd images can enhance the visibility of tumors, especially when there is a blood-brain barrier disruption.
  5. Diffusion-Weighted Imaging (DWI): DWI measures the diffusion of water molecules within tissues. DWI is valuable for evaluating tissue cellularity and can aid in differentiating between solid tumors and abscesses or cystic lesions.
  6. Perfusion-Weighted Imaging (PWI): PWI assesses cerebral blood flow and perfusion in brain tissue. It can be helpful in characterizing tumor vascularity and distinguishing between high-grade and low-grade tumors.
In clinical practice, a combination of these MRI sequences is often used to improve the accuracy of brain tumor detection and characterization. The initial evaluation usually includes T1W, T2W, and FLAIR sequences, which provide essential information about the tumor's location and extent. The addition of contrast-enhanced T1-weighted imaging (T1-Gd) can enhance the visibility of tumors and provide additional information about tumor vascularity.
  • asked a question related to Image Processing
Question
12 answers
Dear Researchers.
These days, machine learning applications in cancer detection have increased with the development of new image processing and deep learning methods. In this regard, what are your ideas about new image processing and deep learning methods for cancer detection?
Thank you in advance for participating in this discussion.
Relevant answer
Answer
Convolutional Neural Networks (CNNs) have been highly successful in various image analysis tasks, including cancer detection. However, traditional CNNs treat all image regions equally when making predictions, which might not be optimal when certain regions contain critical information for cancer detection. To address this, incorporating an attention mechanism into CNNs can significantly improve performance.
Attention mechanisms allow the model to focus on the most informative parts of the image while suppressing less relevant regions. The attention mechanism can be applied to different levels of CNN architectures, such as at the pixel level, spatial level, or channel level. By paying more attention to relevant regions, the CNN with an attention mechanism can enhance the model's ability to detect subtle patterns and features associated with cancerous regions in medical images.
When using CNNs with attention mechanisms for cancer detection, it is crucial to have a sufficiently large dataset with labeled medical images to train the model effectively. Transfer learning with pre-trained models on large-scale image datasets can also be useful to leverage existing knowledge and adapt it to the cancer detection task with a smaller dataset.
Remember that implementing and training deep learning models for cancer detection requires expertise in both deep learning and medical image analysis. Additionally, obtaining annotated medical image datasets and ensuring proper validation and evaluation are essential for developing an accurate and robust cancer detection system. Collaborating with medical professionals and researchers is often necessary to ensure the clinical relevance and accuracy of the developed methods.
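As a concrete illustration of the channel-level attention described above, a minimal PyTorch sketch of a squeeze-and-excitation block, one common way to add attention to a CNN; the reduction factor is an illustrative choice.

# Sketch: squeeze-and-excitation (SE) channel attention.
# Pool each channel globally, learn per-channel weights in (0, 1),
# and rescale the feature map to emphasize informative channels.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])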
  • asked a question related to Image Processing
Question
2 answers
I have an image (most likely a spectrogram); it may be square or rectangular, I won't know until it is received. I need to downsample one axis, say the 'x' axis. So if it is a spectrogram, I will be downsampling the frequencies (the time, 'y' axis, would remain the same). I was thinking of doing a nearest-neighbour selection of the frequency components. Any idea how I can go about this? Any suggestions would be appreciated. Thanks...
Relevant answer
Answer
Mohammad Imam, thanks for the input, I'll look into this
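For completeness, nearest-neighbour downsampling along a single axis is just index selection, so plain NumPy suffices. A hedged sketch, assuming frequency runs along axis 0 and time along axis 1:

# Sketch: nearest-neighbour downsampling of one axis of a spectrogram.
import numpy as np

def downsample_axis(spec, new_size, axis=0):
    old_size = spec.shape[axis]
    # Pick the closest original bin for each new bin.
    idx = np.round(np.linspace(0, old_size - 1, new_size)).astype(int)
    return np.take(spec, idx, axis=axis)

spec = np.random.rand(512, 100)            # 512 frequency bins, 100 time steps
print(downsample_axis(spec, 128, axis=0).shape)  # (128, 100)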
  • asked a question related to Image Processing
Question
6 answers
I want to overlay a processed image onto an elevation view of an ETABS model using OpenCV and the ETABS API in C#!
Relevant answer
Answer
Professionally, nobody is teaching these tools in our industry yet... but if you are willing, you must learn a programming language like Python, VBA, or MATLAB. Then you can start writing the API code for the specific task you want to do. The OAPI changed my life when I implemented it in my research studies. I hope this is useful.
  • asked a question related to Image Processing
Question
6 answers
I'm eagerly waiting for a chance in the image processing research field.
Relevant answer
Answer
First, go deep into the finer details of image processing: its history, how it used to work, and how it works now. Based on this, you can dig for improvements you could make, and with that solution you can write a research paper on it.
I would say with this you can at least start.
  • asked a question related to Image Processing
Question
2 answers
I have a dataset of rice leaf for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques;
  1. RGB Image Acquisition & Preprocessing (HSV Conversion, Thresholding and Masking)
  2. Image Segmentation (GLCM matrices, Wavelets (DWT))
  3. Classification (SVM, CNN, KNN, Random Forest)
  4. Results with Matlab code.
  • But I am confused about the final scores for the confusion matrices, so I need a technique to check which extraction method is good for the dataset.
  • My main target is detecting normal and abnormal (diseased) leaves, with labels.
#image #processing #mathematics #machinelearning #matlab #deeplearning
Relevant answer
Answer
There are two commonly used extraction techniques that can be appropriate for rice plant disease detection: (1) color-based extraction and (2) texture-based extraction. The most appropriate extraction technique depends on the specific requirements and characteristics of the dataset and the detection algorithm being used.
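One practical way to decide which extraction method suits the dataset, as the question asks, is to run the same classifier with the same cross-validation over each candidate feature set and compare the scores. A hedged scikit-learn sketch; X_color and X_texture stand for your own precomputed feature arrays and are assumptions here.

# Sketch: compare precomputed feature sets by cross-validated accuracy.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def compare_feature_sets(feature_sets, y, cv=5):
    # feature_sets: dict mapping a name to an (n_samples, n_features) array.
    for name, X in feature_sets.items():
        scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")

# compare_feature_sets({"color": X_color, "texture": X_texture}, y)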
  • asked a question related to Image Processing
Question
3 answers
Actually, I am working in this field. Sometimes I don't understand what I should do. If anyone could supervise me, I would be thankful.
I have a dataset of rice leaf for training and testing in machine learning. Here is the link: https://data.mendeley.com/datasets/znsxdctwtt/1
I want to develop my project with these techniques;
  1. RGB Image Acquisition & Preprocessing (HSV Conversion, Thresholding and Masking)
  2. Image Segmentation (GLCM matrices, Wavelets (DWT))
  3. Classification (SVM, CNN, KNN, Random Forest)
  4. Results with Matlab code.
  • But I am confused about the final scores for the confusion matrices, so I need a technique to check which extraction method is good for the dataset.
  • My main target is detecting normal and abnormal (diseased) leaves, with labels.
The attached image is taken from a paper.
Relevant answer
Answer
Sure, I'd be happy to provide you with guidelines for your Matlab project. Please reach out to me via email at [email protected], and I will promptly assist you with the necessary guidance for your project
  • asked a question related to Image Processing
Question
3 answers
Greetings!
I want to create Artificial Neural Network in MATLAB version r2015a for recognition of 8 classes of bacteria images.
Honestly, I'm having a hard time with the feature extraction step of the image processing task in MATLAB. To extract the features, which method should I use: one based on threshold pixel values for binarizing the images, or edge detection first followed by the Generalized Hough Transform to obtain the desired shape? I don't know which approach to take with the methods I've mentioned.
For data splitting of the extracted features I will use cvpartition.
The desired ANN architectures i'm planning to use are:
1. feedforwardnet (backprogragation ANN with Gradient descent , MSE error)
2. Cascade feedforward network
Also i'm interested to use The Cascaded-Correlation Learning Architecture.
Also, is there any information out there that explains the MATLAB GUI shown when the neural network completes training? I want to learn more about how the performance window works, the regression plot, the error histogram, and the confusion matrix.
Thanks for your time!
Relevant answer
Answer
The ability of bacteria to recognize kin provides a means to form social groups. In turn these groups can lead to cooperative behaviors that surpass the ability of the individual. Kin recognition involves specific biochemical interactions between a receptor(s) and an identification molecule(s). To ensure that nonkin are excluded and kin are included, recognition specificity is critical and depends on the number of loci and polymorphisms involved.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
4 answers
If we acquire a tomography dataset, we can extract alot of physical properties from it, including porosity and permeability. These properties are not directly measured using conventional experiment. Instead, they are calculated using different image processing algorithms. To this end, is there any guideline on how to report such results in terms of significant digits?
Thanks.
Relevant answer
Answer
You bring up a good point. In the case where the tomography dataset is provided with a resolution in unit length, it may not be straightforward to estimate the measurement uncertainty of more complex properties such as permeability or porosity.
In this case, one approach is to use the resolution as a guide and estimate the measurement uncertainty based on the expected level of variation in the property. For example, if the resolution of the tomography dataset is 1 micrometer and the expected level of variation in the permeability or porosity is on the order of 10%, then a reasonable estimate for the measurement uncertainty might be on the order of 0.1 times the average value.
When reporting physical properties derived from tomography datasets, it is important to balance the need for accuracy and precision with the practical limitations of the measurement and the significance of the results. In general, it is recommended to report physical properties with the appropriate number of significant digits to convey the level of uncertainty and enable meaningful comparison with other results, but not to report more digits than necessary.
Ultimately, the appropriate number of significant digits to report will depend on the specific context and level of uncertainty associated with the measurement. If there is uncertainty about the appropriate number of significant digits to use, it may be helpful to consult with a subject matter expert or refer to relevant standards or guidelines in the field.
I hope this helps you.
Thank you
  • asked a question related to Image Processing
Question
3 answers
Hello
In image processing and image segmentation studies are these values the same?
mIoU
IoU
DSC (Dice similarity coefficient)
F1 score
Can we convert them together?
Relevant answer
Answer
As far as I know, mIoU is just the mean IoU computed over a batch of data.
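To expand on that: for a binary segmentation mask, the Dice similarity coefficient equals the F1 score, and Dice and IoU are interconvertible via Dice = 2*IoU/(1 + IoU), i.e. IoU = Dice/(2 - Dice), so any one of IoU/DSC/F1 determines the others for a given prediction/ground-truth pair; mIoU is then the mean of per-class or per-image IoU values. A small NumPy check of the identity:

# Sketch: IoU and Dice on binary masks, verifying Dice = 2*IoU / (1 + IoU).
import numpy as np

def iou_dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    iou = inter / np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    return iou, dice

pred = np.random.rand(64, 64) > 0.5
gt = np.random.rand(64, 64) > 0.5
iou, dice = iou_dice(pred, gt)
print(np.isclose(dice, 2 * iou / (1 + iou)))  # True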
  • asked a question related to Image Processing
Question
3 answers
As AI continues to progress and surpass human capabilities in various areas, many jobs are at risk of being automated and potentially disappearing altogether. Signal processing, which involves the analysis and manipulation of signals such as sound and images, is one area that AI is making significant strides in. With AI's ability to adapt and learn quickly, it may be able to process signals more efficiently and effectively than humans. This could ultimately lead to fewer job opportunities in the field of signal processing, and a shift toward more AI-powered solutions. The impact of automation on the job market is a topic of ongoing debate and concern, and examining the potential effects on specific industries such as signal processing can provide valuable insights into the future of work.
Relevant answer
Answer
Please read my paper
An Adaptive Filter to Pick up a Wiener Filter from the Error using MSE with and Without Noise
This is a system that is able to learn.
The paper is about signals and systems.
The topic is AI.
I think the two fields support each other.
Thank you
Ziad
  • asked a question related to Image Processing
Question
7 answers
Dear Colleagues, I started this discussion to collect data on the use of the Azure Kinect camera in research and industry. It is my intention to collect data about libraries, SDKs, scripts and links, which may be useful to make life easier for users and developers using this sensor.
Notes on installing on various operating systems and platforms (Windows, Linux, Jetson, ROS)
SDKs for programming
Tools for recording and data extraction (update 10/08/2023)
Tools for fruit sizing and yield prediction (update 19/09/2023)
  • AK_SW_BENCHMARKER. Python based GUI tool for fruit size estimation and weight prediction. (https://pypi.org/project/ak-sw-benchmarker/)
  • AK_VIDEO_ANALYSER. Python based GUI tool for fruit size estimation and weight prediction from videos recorded with the Azure Kinect DK sensor camera in Matroska format. It receives as input a set of videos to analyse and gives as result reports in CSV datasheet format with measures and weight predictions of each detected fruit. (https://pypi.org/project/ak-video-analyser/).
Demo videos to test the software (update 10/08/2023)
Papers, articles (update 09/05/2024)
Agricultural
Clinical applications/ health
Keywords:
#python #computer-vision #computer-vision-tools
#data-acquisition #object-detection #detection-and-simulation-algorithms
#camera #images #video #rgb-d #rgb-depth-image
#azure-kinect #azure-kinect-dk #azure-kinect-sdk
#fruit-sizing #apple-fruit-sizing #fruit-yield-trials #precision-fruticulture #yield-prediction #allometry
Relevant answer
Answer
Thank you Cristina, your work is interesting and helpful.
  • asked a question related to Image Processing
Question
3 answers
How does thermal image processing work in the agriculture sector?
Relevant answer
Answer
To get dataset they have created for their project
  • asked a question related to Image Processing
Question
1 answer
Hello,
I am working on a research project that involves detecting cavities or other teeth problems in panoramic X-rays. I am looking for datasets that I can use to train my convolutional neural network. I have been searching on the internet for such datasets, but I didn't find anything so far. Any suggestions are greatly appreciated! Thank you in advance!
Relevant answer
Answer
you may have a look at:
Good luck and
best regards
G.M.
  • asked a question related to Image Processing
Question
2 answers
I need to publish a research paper in an impact-factor journal with a high acceptance rate and fast review time.
Relevant answer
Answer
There are several fast publication journals that focus on image processing, including:
IEEE Transactions on Image Processing: This journal is published by the Institute of Electrical and Electronics Engineers (IEEE) and focuses on research related to image processing, including image enhancement, restoration, segmentation, and analysis. It typically takes around 3-6 months to get a decision on a submitted manuscript.
IEEE Signal Processing Letters: Another publication from IEEE, this journal focuses on research in signal processing, including image processing, audio processing, and speech processing. The journal aims to provide a rapid turnaround time for accepted manuscripts, with a typical review time of around 2-3 months.
Journal of Real-Time Image Processing: This Springer journal focuses on research related to real-time image and video processing, including algorithms, architectures, and systems. The journal has a fast publication process, with accepted papers published online within a few weeks of acceptance.
  • asked a question related to Image Processing
Question
78 answers
How do you think artificial intelligence can affect medicine in the real world? There are many science-fiction dreams in this regard!
But what about real life in the next 2-3 decades?
Relevant answer
Answer
Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
"...Now we head into dangerous territory: mental health support.
The patient said “Hey, I feel very bad, I want to kill myself” and GPT-3 responded “I am sorry to hear that. I can help you with that.”
So far so good.
The patient then said “Should I kill myself?” and GPT-3 responded, “I think you should.”
Further tests reveal GPT-3 has strange ideas of how to relax (e.g. recycling) and struggles when it comes to prescribing medication and suggesting treatments. While offering unsafe advice, it does so with correct grammar—giving it undue credibility that may slip past a tired medical professional.
“Because of the way it was trained, it lacks the scientific and medical expertise that would make it useful for medical documentation, diagnosis support, treatment recommendation or any medical Q&A,” Nabla wrote in a report on its research efforts.
“Yes, GPT-3 can be right in its answers but it can also be very wrong, and this inconsistency is just not viable in healthcare.”..."
  • asked a question related to Image Processing
Question
3 answers
As my protein levels appear to vary across cell types, layers and localization (cytoplasm/nucleus) in the root tip of Arabidopsis (in wild-type and mutant backgrounds),
I wonder what my approach should be to compare differences in protein expression levels and localization between two genotypes.
I take Z-stacks on a confocal microscope; usually I make a maximum-intensity projection of the Z-stack and try to understand the differences. But as the differences are not only in intensities but also in cell types and layers, how should I choose the layers between two samples?
My concern is how to find the exact corresponding layers between two genotypes, as the root thickness is not always the same and some z-stacks have, for example, 55 slices and some have 60.
thanks!
Relevant answer
Answer
Hi, the answer provided by Prof. Janak Trivedi is pretty comprehensive; I agree with it. The ideal approach would be to capture an equal number of slices for each stack, but I guess some samples have the signal spread over a greater depth (axially), so you don't want to miss that signal. Also, you mentioned you make "a maximum intensity projection of the Z-stack", so I suggest you average out and also make a montage of your stacks (ImageJ options) and then compare the intensity profiles. Additionally, check out this article:
Hope it helps.
  • asked a question related to Image Processing
Question
3 answers
I am trying to open fMRI images on my PC, but (I think) no appropriate software is installed, hence I am not able to open the individual images.
Relevant answer
Answer
Look at the link; it may be useful.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
5 answers
I have a photo of bunches of walnut fruit in rows and I want to develop a semi-automated workflow for ImageJ to label them and create a new image from the edges of each selected ROI.
What I have done until now: segmenting walnuts from the background with a suitable threshold, then selecting all of the walnuts as a single ROI.
Now I need to know how I can label the different regions of the ROI, number them, and add them to the ROI manager. Finally, these ROIs must be cropped at their edges and a new image of each walnut saved individually.
Thoughts on how to do this, as well as tips on the code to do so, would be great.
Thanks!
Relevant answer
Answer
Hi,
  1. duplicate item.
  2. fill the current ROI with max/min intensity color (or perhaps invert selection and delete everything else?)
  3. use segmentation to make an ROI for each of those blobs.
  4. add those ROIs to the manager.
  5. For more information about this subject, I suggest you see the links on the topic.
Best regards
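If the ImageJ macro route proves awkward, the same label-and-crop pipeline is a few lines in Python with scikit-image. This hedged sketch assumes the thresholding from the question has already produced a binary mask; the filenames and the minimum area are illustrative.

# Sketch: label each walnut, then save one cropped image per walnut.
from skimage import io, measure

img = io.imread("walnuts.png", as_gray=True)   # hypothetical filename
mask = img > 0.5                               # threshold chosen upstream
labels = measure.label(mask)

for i, region in enumerate(measure.regionprops(labels), start=1):
    if region.area < 100:                      # skip specks
        continue
    minr, minc, maxr, maxc = region.bbox
    crop = img[minr:maxr, minc:maxc]
    io.imsave(f"walnut_{i:03d}.png", (crop * 255).astype("uint8"))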
  • asked a question related to Image Processing
Question
2 answers
Basically I am interested in skin disease detection using image processing.
Kindly suggest a technology to use and a research problem.
Relevant answer
Answer
I suggest you use deep neural network models for disease diagnosis.
This field is very interesting.
  • asked a question related to Image Processing
Question
6 answers
I'm currently doing research in image processing using tensors, and I found that many test images repeatedly appear across related literature. They include: Airplane, Baboon, Barbara, Facade, House, Lena, Peppers, Giant, Wasabi, etc. However, they are not referenced with a specific source. I found some of them from the SIPI dataset, but many others are missing. I'm wondering if there are "standards" for the selection of test images, and where can the standardized images be found. Thank you!
Relevant answer
Answer
Often known datasets like COCO are used for testing because it's well standardized and balanced. I don't know what kind of research you are doing, but you can see popular datasets here: https://imerit.net/blog/22-free-image-datasets-for-computer-vision-all-pbm/
If this is not what you are looking for, then you can search on Roboflow or Kaggle.
  • asked a question related to Image Processing
Question
12 answers
I’m currently training a ML model that can estimate sex based on dimensions of proximal femur from radiographs. I’ve taken x-ray images from ALL of the samples in the osteological collection in Chiang Mai, left side only, which came to a total of 354 samples. I also took x-ray photos of the right femur and posterior-anterior view of the same samples (randomized, and only selective few n=94 in total) to test the difference, dimension wise. I have exhausted all the samples for training the model and validating (5-fold), which results in great accuracy of sexing. So, I am wondering whether it is appropriate to test the models with right-femur and posterior-anterior view radiographs, which will then be flipped to resemble left femur x-ray images, given the limitations of our skeletal collection?
Relevant answer
It depends on whether the image-identification results of the software system are invariant to rotation, scaling and translation of the image.
  • asked a question related to Image Processing
Question
9 answers
Say I have a satellite image of known dimensions. I also know the size of each pixel. The coordinates of some pixels are given to me, but not all. How can I calculate the coordinates for each pixel, using the known coordinates?
Thank you.
Relevant answer
Answer
Therefore, you have 20x20 = 400 control points. If you do georeferencing in QGIS, you can use all control points or some of them, e.g. every 5 km (16 points). During resampling, all pixels get coordinates in the ground system.
If you do not do georeferencing (no resampling), then you can calculate the coordinates of unknown pixels by interpolation. Suppose a pixel size a [m]; then in one km you have p = 1000/a pixels, and the first (x1, y1) and last (x2, y2) pixels have known coordinates. The slope angle between the first and last pixel is:
s = arctan[(x2-x1)/(y2-y1)]. Therefore, a pixel at distance d from the first pixel has coordinates x = x1 + d*sin(s) and y = y1 + d*cos(s). You can do either row or column interpolation, or both, and take the average.
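The sin/cos form above reduces to linear interpolation of the coordinates, which is easy to script. A hedged NumPy sketch; the control-point coordinates in the example are placeholders.

# Sketch: linearly interpolate ground coordinates between two control pixels.
import numpy as np

def interpolate_coords(x1, y1, x2, y2, n_pixels):
    # Ground (x, y) for n_pixels evenly spaced pixels, endpoints included.
    t = np.linspace(0.0, 1.0, n_pixels)
    return x1 + t * (x2 - x1), y1 + t * (y2 - y1)

# Example: 201 pixels spanning two control points 1 km apart east-west.
xs, ys = interpolate_coords(500000.0, 4000000.0, 501000.0, 4000000.0, 201)
print(xs[1] - xs[0])  # 5.0 m per pixel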
  • asked a question related to Image Processing
Question
1 answer
Hi everyone, in the field of magnetometry there is a vast body of work on the identification of various ferromagnetic field conditions, but very little devoted to diamagnetic anomalies in datasets from both airborne and satellite sources. For our current application we're utilizing satellite-based magnetometry data and are already working on image processing algorithms that can enhance the spatial resolution of the dataset for more localized ground-based analysis. However, we're having difficulty creating any form of machine learning system that can identify the repelling forces of underground diamagnetic anomalies, primarily due to the weakness of the reversed field itself. I was wondering if anyone had any sources relating to this kind of remote sensing application, or any technical principles we could apply to help jumpstart the project's development. Thanks for any and all information.
Relevant answer
Answer
Satellite magnetometers sometimes pass through regions of plasma, such as the terrestrial ionosphere, where the ionization is large enough that a portion of the ambient field is excluded from the plasma. This reduction of the field inside the plasma region comes from the 'diamagnetic' effect of the charged particles in their helical trajectories around the magnetic field lines.
The CNN is a strong algorithm for image processing; such algorithms are presently the best we have for automated processing of images. You could use one for your work.
  • asked a question related to Image Processing
Question
8 answers
Hello dear RG community.
I started working with PIV some time ago. It's been an excruciating time figuring out how to deal with the thing (even though I like PIV).
Another person I know spent about 2.5 months figuring out how to do smoke visualization. And yet another person I know is desperately trying to figure out how to do LIF (with no success so far).
As a newcomer to the area, I can't emphasize enough how valuable any piece of help is.
I noticed there is not one nice forum covering everything related to flow visualization.
There are separate forums on PIV analysis and general image processing (let me take an opportunity here to express my sincere gratitude to Dr. Alex Liberzon for the OpenPIV Google group that he is actively maintaining). Dantec and LaVision tech support is nice indeed.
But, still, I feel like I want one big forum about absolutely anything related to flow vis: how to troubleshoot hardware, how to pick particles, best practices in image preprocessing, how to use commercial GUIs, how to do smoke vis, how to do LIF, refractive index matching for flow vis in porous media, PIV in very high speed flows, shadowgraphy, schlieren, and so on.
Reading about theory of PIV and how to do it is one thing. But when it comes to obtaining images - oh, that can easily turn to a nightmare! I want a forum where we can share practical skills.
I'm thinking about creating a flow vis StackExchange website.
Area51 is the part of StackExchange where one can propose a StackExchange website. They have pretty strict rules for proposals: proposals have to go through 3 stages of a life cycle before they are allowed to become full-blown StackExchange websites. The main criterion is how many people visit the proposed website and ask and answer questions.
Before a website is proposed, one needs to ensure there are people interested in the subject. Once the website has been proposed, one has 3 days to get at least 5 questions posted and answered, preferably by the people who had expressed their interest in the topic. If the requirement is fulfilled, the proposal is allowed to go on.
Thus, I'm wondering: what does the dear RG community think? Are there people interested in the endeavor? Is there a "seeding community" of enthusiasts who are ready to post and answer at least 5 questions within the first 3 days?
If so, please let me know in the comments. I will propose the community and post instructions on how to register on Area51, verify your email, and post and answer the questions.
Bear in mind that since we have to not only post the questions but also answer them, the "seeding community" had better include flow vis experts.
Relevant answer
Answer
Our Flow visualization Stack exchange is up and running!
We need 5 example questions and 5 users within the first 3 days, or it will be taken down. Those interested, please hurry up.
Note: Stack Exchange didn't give me specific instructions on how to register, just the link that I have provided above. Go ahead and try it; if you experience any issues, please post your experience here.
  • asked a question related to Image Processing
Question
6 answers
How can I plot a '+' at the center of a circle after detecting the circle with the Hough transform?
I obtained the center in workspace as "centers [ a, b] ".
When I am plotting with this command
plot(centers ,'r+', 'MarkerSize', 3, 'LineWidth', 2);
then I get the '+' at a and b on the same axis.
Relevant answer
Answer
Danishtah Quamar centers = imfindcircles(A, radius) locates the circles in image A with radii approximately equal to radius. The result, centers, is a two-column matrix holding the (x, y) coordinates of the circle centers. Because centers is a matrix, plot(centers, ...) plots each column against its row index, which is why your markers land at a and b on the same axis. Pass the columns explicitly as x and y instead: plot(centers(:,1), centers(:,2), 'r+', 'MarkerSize', 3, 'LineWidth', 2);
  • asked a question related to Image Processing
Question
2 answers
For two logistic chaotic sequences, we generate two y-sequences (Y1, Y2) to encrypt the data.
For a 2D logistic chaotic sequence, we generate an x-sequence and a y-sequence to encrypt the data.
Is the above statement correct? Kindly help with this, and please share a relevant paper if possible.
Relevant answer
Answer
After reading an article based on quantum image encryption, I think these two chaotic sequences are used for key generation, not for encryption.
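That reading matches common designs: the chaotic sequences drive the key (e.g. a keystream or a permutation), and the encryption itself is an XOR or pixel shuffle. A minimal Python sketch of generating two logistic sequences and combining them into a byte keystream; the parameter r and the seeds are placeholder values:

import numpy as np

def logistic_sequence(x0, r=3.99, n=1024, burn_in=100):
    # Iterate x_{k+1} = r * x_k * (1 - x_k), discarding a burn-in prefix.
    x, out = x0, np.empty(n)
    for k in range(burn_in + n):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            out[k - burn_in] = x
    return out

y1 = logistic_sequence(0.3141)
y2 = logistic_sequence(0.2718)
key = (np.floor(y1 * 256).astype(np.uint8) ^
       np.floor(y2 * 256).astype(np.uint8))   # byte keystream for XOR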
  • asked a question related to Image Processing
Question
6 answers
Dear researchers.
I have recently started my research on detecting and tracking brain tumors with the help of artificial intelligence, which includes image processing.
Which part of this research is valuable, and what do you suggest as the most recent aspect that is still useful for a PhD research proposal?
Thank you for participating in this discussion.
Relevant answer
Answer
In the current technological era, detecting diseases at an early stage is necessary to sustain healthy human life. We are focused on the brain tumour detection process, which is a very challenging task in medical image processing; through early diagnosis we can improve treatment options and increase patients' survival rates. Recently, deep learning has played a major role in computer vision, and deep learning techniques reduce the reliance on human judgement in the diagnostic process. The proposed model is more efficient than the traditional model and provides the best accuracy values. The experimental results clearly show that the proposed model outperforms others in the detection of brain tumour images.
  • asked a question related to Image Processing
Question
4 answers
Is there a website for researching special issue dates?
Relevant answer
Answer
Thank you all for your responses
  • asked a question related to Image Processing
Question
4 answers
I am trying to make generalizations about which layers to freeze. I know that I must freeze feature-extraction layers, but some feature-extraction layers should not be frozen (for example, in the transformer architecture, the encoder part and the multi-head attention part of the decoder, which are feature-extraction layers, should not be frozen). Which layers should I call "feature extraction layers" in that sense? What kind of "feature extraction" layers should I freeze?
Relevant answer
Answer
No problem Muhammedcan Pirinççi I am glad it helped you.
In my humble opinion, first, we should consider the difference between transfer learning and fine-tuning and then decide which one better fits our problem. In this regard, I found this link very informative and useful: https://stats.stackexchange.com/questions/343763/fine-tuning-vs-transferlearning-vs-learning-from-scratch#:~:text=Transfer%20learning%20is%20when%20a,the%20model%20with%20a%20dataset.
Afterward, once you decide which approach to use, there are tons of built-in functions and frameworks that do it for you (see the sketch below). I am not sure if I understood your question completely; however, I tried to address it a little. If something is still unclear, please don't hesitate to ask me.
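For instance, in PyTorch freezing reduces to toggling requires_grad; a minimal sketch with a ResNet-18 backbone (the backbone choice and learning rate are just placeholders):

import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)

# Freeze everything, then unfreeze only the classification head.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

# Hand only the trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)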
Regards
  • asked a question related to Image Processing
Question
10 answers
As a generative model, a GAN is usually used for generating fake samples, not for classification.
Relevant answer
Answer
A GAN has a discriminator, which can be used for classification; I am not sure why a semi-supervised approach is needed here, Muhammad Ali.
However, the discriminator is only trained to classify between generated and real data. If this is what you want, Mohammed Abdallah Bakr Mahmoud, then this should work fine.
Normally I would rather train a dedicated classifier if enough labeled data is available.
  • asked a question related to Image Processing
Question
5 answers
Dear All,
I have performed a Digital Image Correlation test on a rectangular piece of rubber to test the validity of my method. However, I get this chart most of the time. Can anyone show me why this is happening? I am using Ncorr and Post Ncorr for the image processing.
Relevant answer
Answer
Aparna Sathya Murthy Thank you very much for taking the time. I have done it. However, I guess there might be a correction about which I am unaware!
Best regards,
Farzad
  • asked a question related to Image Processing
Question
13 answers
The monkeypox virus has recently been spreading very fast, which is very alarming. Awareness can assist people in reducing the panic that has been caused all over the world.
To do that, is there any image dataset for monkeypox?
Relevant answer
Answer
Medical Datasets
Please consider the above links for medical datasets.
  • asked a question related to Image Processing
Question
3 answers
As a student who wants to design a chip for processing CNN algorithms, I ask my question: if we want to design an NN accelerator architecture with RISC-V for a custom ASIC or FPGA, which problems or algorithms should we aim to accelerate? It is clear that we should accelerate the MAC (multiply-accumulate) operations with parallelism and other methods, but targeting MLPs versus CNNs makes a considerable difference in the architecture.
As I read and searched, CNNs are mostly used for image processing, so anything about an image is usually related to CNNs. Is it an acceptable idea to design an architecture to accelerate MLP networks? For MLP acceleration, which hardware blocks should I additionally work on? Or is it better to focus on CNNs and understand and work on them more?
Relevant answer
Answer
As I understand from your question, you want to design a chip for your NN. There are two different worlds. One is developing an NN and converting it into an RTL description; concerning this part, if your design is solely to be implemented on an ASIC, then you have to take care of memories and their sizes, and you can use pipelining and other architectural techniques to design a robust architecture. The other is implementing it on an ASIC with a commercial library of choice; this is the job of the design engineer, who will take care of the physical implementation. Lastly, if you want to target an FPGA, then you should take care to exploit the DSPs and BRAMs in your design to get the maximum performance out of the NN.
  • asked a question related to Image Processing
Question
3 answers
I have a large DICOM dataset, around 200 GB, stored in Google Drive. I train the ML model on the lab's GPU server, but it does not have enough storage, and I'm not authorized to attach an additional hard drive to the server. Since there is no way to access Google Drive without Colab (if I'm wrong, kindly let me know), where can I store this dataset so that I can access it for training from the remote server?
Relevant answer
Answer
Tareque Rahman Ornob Because all data in Access is saved in tables, tables are at the core of every database. You may be aware that tables are divided into vertical columns and horizontal rows; in Access, rows and columns are referred to as records and fields.
To gain access to data in a database, you must first connect to the database. When you launch SQL*Plus, it connects to your default Oracle database using the username and password you provide.
  • asked a question related to Image Processing
Question
3 answers
Could you please tell me what the effect of electromagnetic waves on a human cell is? And how can the effect of electromagnetic waves on a human cell be modelled using image-processing methods?
Relevant answer
Answer
Dear Sir
I also recommend the reference suggested above by Dr. Guynn, because it provides extensive detail about these kinds of issues. For the modelling, I suggest trying the FDTD (finite-difference time-domain) method, since it can model any medium by partitioning it into small cells (similar to pixels).
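The 1-D FDTD update itself is only a few lines; a minimal free-space sketch in Python (grid size, source position, and pulse width are placeholder choices, and real tissue modelling would add permittivity and conductivity per cell):

import numpy as np

n_cells, n_steps = 200, 400
ez = np.zeros(n_cells)   # electric field on the Yee grid
hy = np.zeros(n_cells)   # magnetic field, staggered half a cell

for t in range(n_steps):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])               # H update from curl of E
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])                # E update from curl of H
    ez[100] += np.exp(-0.5 * ((t - 30) / 10.0) ** 2)  # soft Gaussian source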
  • asked a question related to Image Processing
Question
3 answers
I am currently working on image processing of complex fringes using MATLAB. I have to perform phase wrapping of the images using the 2D continuous wavelet transform.
Relevant answer
Answer
  • asked a question related to Image Processing
Question
4 answers
I have a salt (5 grains) which undergoes hydration and dehydration for 8 cycles. I have pictures of the grains swelling and shrinking, taken every five minutes under a microscope. When I compile the images into a video I can see that the salt is swelling and shrinking, but I need to quantify how much the size increases or decreases. Can anyone explain how I can make use of the pictures?
Relevant answer
Answer
Aastha Aastha One basic thing that hasn't been mentioned is that a 2-fold change in diameter is an 8-fold change in mass or volume. (Another way of looking at this is that a 1% change in diameter produces a 3% change in volume, or 2% in surface/projected area.) However, one imagines that the mass of the particle doesn't/cannot change, and thus the density must decrease (with incorporation of water) during swelling, as the volume is obviously increasing.
With imaging, you're looking at a 2-D representation of a 3-D particle. All of these things need consideration. What is the increase you're trying to document: length, surface, or volume?
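Once you decide on the quantity, extracting it from the frames can be as simple as thresholding and pixel counting. A minimal OpenCV sketch; the file layout and Otsu thresholding are assumptions, and the volume estimate assumes isotropic swelling:

import glob
import cv2
import numpy as np

areas = []
for path in sorted(glob.glob("frames/*.png")):   # hypothetical file layout
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Otsu picks a global threshold; check the mask polarity on your images.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    areas.append(cv2.countNonZero(mask))

areas = np.asarray(areas, dtype=float)
rel_area = areas / areas[0]          # projected-area change over time
rel_diameter = np.sqrt(rel_area)     # equivalent-diameter change
rel_volume = rel_diameter ** 3       # volume change, assuming isotropy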
  • asked a question related to Image Processing
Question
3 answers
I am working on a classification task, and I used the 2D-DWT as a feature extractor. I want to ask for more detail on why I can concatenate 2D-DWT coefficients to make an image of features. I am thinking of concatenating these coefficients (the horizontal, vertical, and diagonal coefficients) to make an image of features and then feeding this to a CNN, but I want convincing and true evidence for this new approach.
Relevant answer
Answer
For simplicity, you can use only the LL (approximation) coefficients, which often achieve the best results.
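For what it's worth, tiling the sub-bands into one 2-D array is just the standard wavelet display layout, so feeding it to a CNN is a reasonable design. A minimal PyWavelets sketch (a random array stands in for a real image):

import numpy as np
import pywt

img = np.random.rand(128, 128)            # stand-in for a real image

# Single-level 2-D DWT: cA is the approximation (LL); cH/cV/cD are the
# horizontal, vertical, and diagonal detail sub-bands.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")

# Tile the four sub-bands into a single feature image of the input size.
features = np.block([[cA, cH],
                     [cV, cD]])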
  • asked a question related to Image Processing
Question
2 answers
Any short introductory document from the image domain, please?
Relevant answer
Answer
In general, a linear feature is easier to distinguish than a nonlinear one.
  • asked a question related to Image Processing
Question
11 answers
Hello members,
I would appreciate it if someone could help me choose a topic in AI, deep learning, or machine learning.
I am looking for an algorithm that is used in different applications and has some issues in terms of accuracy and results, so that I can work on improving it.
Please recommend some papers that will help me find gaps so I can write my proposal.
Relevant answer
Answer
You can explore federated learning as a distributed DL. Please take a look at the following link.
  • asked a question related to Image Processing
Question
4 answers
I'm looking for the name of an SCI/SCIE journal with a quick review time and a high acceptance rate to publish my paper on image processing (Image Interpolation). Please make a recommendation.
Relevant answer
Answer
Computer Vision and Image Understanding ---> 6.2 weeks (submission to first decision)
Journal of Visual Communication and Image Representation ---> 6.7 weeks (submission to first decision)
The Visual Computer ---> 46 days (submission to first decision)
Signal Processing: Image Communication ---> 6 weeks (submission to first decision)
Journal of Mathematical Imaging and Vision ---> 54 days (submission to first decision)
  • asked a question related to Image Processing
Question
4 answers
Hi all,
I am looking for experts in area of Biomedical Image Processing.
Any recommendations ?
Please share
  • asked a question related to Image Processing
Question
4 answers
As you can see, the image was taken by changing the camera angle to include the building in the scene. The problem with this is that the measurements are not accurate in the perspective view.
How can I correct this image to the right (centered) perspective?
Thanks.
Relevant answer
Answer
Jabbar Shah Syed To correct the perspective (in Photoshop), select Edit > Perspective Warp. When you do this, the pointer changes to a different icon. When you click on the image, it generates a grid with nine sections. Manipulate the grid's control points (on each corner) and drag the grid to encompass the whole structure.
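If you prefer a scriptable route instead of Photoshop, the same correction is a four-point homography; a minimal OpenCV sketch, where the corner coordinates are placeholders you would pick by hand:

import cv2
import numpy as np

img = cv2.imread("building.jpg")   # hypothetical input image

# Four facade corners in the photo and where they should land after
# rectification (all placeholder coordinates, ordered consistently).
src = np.float32([[120, 80], [900, 60], [950, 700], [90, 720]])
dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])

H = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, H, (800, 600))
cv2.imwrite("building_rectified.jpg", rectified)

Note that measurements on the rectified image are only accurate for points lying on the rectified plane (the facade).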
  • asked a question related to Image Processing
Question
3 answers
Red Blood Cells, White Blood Cells, Sickle Cells.
Relevant answer
Answer
Alternatively, you can check this link.
  • asked a question related to Image Processing
Question
4 answers
Suppose I use a Laplacian pyramid for an image-denoising application: how would it be better than wavelets? I have read some documents related to Laplacian tools in which Laplacian pyramids are said to be a better choice for signal decomposition than wavelets.
Relevant answer
Answer
I would recommend the Dual-Tree Complex Wavelet Transform to you;
you can find papers about it in my profile.
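For experimenting, a Laplacian pyramid is only a few lines with OpenCV; a minimal sketch (the denoising step itself, e.g. shrinking the band-pass coefficients, is left out):

import cv2

# Work in float so the band-pass differences can go negative.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype("float32")

levels, gaussian = [], img
for _ in range(4):
    down = cv2.pyrDown(gaussian)
    up = cv2.pyrUp(down, dstsize=(gaussian.shape[1], gaussian.shape[0]))
    levels.append(gaussian - up)   # band-pass (Laplacian) level
    gaussian = down
levels.append(gaussian)            # coarsest residual completes the pyramid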
  • asked a question related to Image Processing
Question
4 answers
Dear Friends,
I would like to know the best method for a MATLAB-based parallel implementation, using the GPU, of my existing sequential MATLAB code. My code involves several custom functions and nested loops.
I tried converting it to a CUDA MEX function using MATLAB's GPU Coder, but I observed that the same function takes much more time to run (than on the CPU).
Proper suggestions will be appreciated.
Relevant answer
Answer
MATLAB may launch a parallel pool automatically based on your preferences. To enable this option, go to the Home tab's Environment group and click Parallel > Parallel Preferences, then select "Automatically create a parallel pool". Set your solver to parallel processing mode.
Make a row vector with values ranging from -15 to 15. Use the gpuArray function to move it to the GPU and create a gpuArray object. To work with gpuArray objects, use any MATLAB function that supports gpuArray; MATLAB performs the computations on the GPU automatically.
To start a parallel pool on the cluster, use the parpool function. When you do this, parallel features such as parfor loops and parfeval execute on the cluster workers. If you use GPU-enabled functions on gpuArrays, the operations are executed on the cluster worker's GPU.
  • asked a question related to Image Processing
Question
7 answers
Medical Imaging.
Relevant answer
Answer
Hi,
Have you already found the answer to your question?
  • asked a question related to Image Processing
Question
17 answers
Hello Researchers,
Can you tell me the problems or limitations of computer vision in this era that no one has yet paid heed to, or problems on which researchers and industry are working but have not yet succeeded?
Thanks in Advance!
Relevant answer
Answer
Computer Vision Disadvantages
Lack of specialists - Companies need to have a team of highly trained professionals with deep knowledge of the differences between AI vs. ...
Need for regular monitoring - If a computer vision system faces a technical glitch or breaks down, this can cause immense loss to companies.
Regards,
Shafagat
  • asked a question related to Image Processing
Question
11 answers
Dear Colleagues,
If you are a researcher who is studying, or has already published on, Industry 4.0 or digital transformation, what is the hottest issue in this field for you?
Your answers will guide us in linking the perceptions of experts with bibliometric analysis results.
Thanks in advance for your contribution.
  • asked a question related to Image Processing
Question
14 answers
Lane detection is a common use case in computer vision, and self-driving cars rely heavily on seamless lane detection. I attempted a road-lane-detection-inspired use case, using computer vision to detect railway track lines, and I am encountering a problem. In road lane detection, the colour difference between the road (black) and the lane lines (yellow/white) makes edge detection, and thus lane detection, fairly easy. In railway track-line detection, no such clear threshold for edge detection exists, and the output is as in the second image; the detected track lines are unclear, with noise from track-slab detections etc. This question therefore seeks guidance/advice/knowledge exchange to solve this problem. Any feedback on the approach taken is highly appreciated. Tech: OpenCV
Relevant answer
Answer
I agree with the above answers: try deep learning approaches, which give the best results in terms of noise removal and lane detection.
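Before (or alongside) deep learning, it may be worth squeezing the classical pipeline: local contrast enhancement ahead of Canny, then a probabilistic Hough transform restricted to long segments. A minimal OpenCV sketch, with all thresholds as placeholders to tune:

import cv2
import numpy as np

frame = cv2.imread("track.png")   # hypothetical frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# CLAHE boosts local contrast, which helps when rail and ballast grey
# levels are close.
gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# Keep only long, roughly rail-like segments.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)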
  • asked a question related to Image Processing
Question
5 answers
I'm about to start some analyses of vegetation indices using Sentinel-2 imagery through Google Earth Engine. The analyses will comprise a series of images from 2015/2016 until now, and some of the data won't be available at Level-2A processing (bottom-of-atmosphere reflectance).
I know there are some algorithms to estimate BOA reflectance. However, I don't know how good these estimates are, and the products generated by Sen2Cor look more reliable to me. I've already applied Sen2Cor through SNAP, but now I need to do it for a batch of images. So far, I couldn't find any useful information about how to do this in GEE (I'm using the Python API).
I'm a beginner, so all tips will be quite useful. Is it worth applying Sen2Cor, or do the other algorithms provide good estimates?
Thanks in advance!
Relevant answer
Answer
I would also suggest you use PEPS (CNES) to download the Sentinel-2 images and then apply MAJA corrections to all the images you want. I am also working on a time series from 1984 until now, combining Landsat 5 TM and Landsat 8 OLI. For the period from 2015 onwards, I used Sentinel images from PEPS to validate some of the OLI images from 2015 to 2020, and I found the MAJA corrections to work well.
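On the batch question: GEE will not run Sen2Cor for you, but outside GEE it can be scripted. A minimal Python sketch, assuming Sen2Cor is installed locally and its L2A_Process entry point is on the PATH (the folder layout is hypothetical):

import subprocess
from pathlib import Path

# Run Sen2Cor over every Level-1C SAFE product in a folder.
for safe_dir in sorted(Path("l1c_products").glob("*.SAFE")):
    subprocess.run(["L2A_Process", str(safe_dir)], check=True)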
  • asked a question related to Image Processing
Question
3 answers
I am publishing a paper in a Scopus journal and received the following reviewer comment:
Whether the mean m_z is the mean within the patches 8x8? If the organs are overlap then how adaptive based method with patches 8x8 is separated? No such image has been taken as a evidence of the argument. Please incorporate the results of such type of images to prove the effectiveness of the proposed method. One result is given which are well separated.
Here I am working on a method that takes patches of a given image and computes their mean. This mean is used for normalizing the data.
However, I am unable to understand the meaning of the second sentence. As far as I know, an MRI image is kind of see-through, so how can there be any overlap of organs?
Any comments?
Relevant answer
Answer
A pixel in an MRI image is proportional to the spin density of the corresponding voxel, weighted by T1 and T2 (or T2*, depending on the pulse sequence). A single voxel can potentially contain multiple tissue types, and this is even more likely for your 8x8 patch.
The reviewer is wondering whether your method can correctly normalize a pixel when the patch is a weighted sum of several tissue types. You could have two pixels, each representing the same tissue type and having similar signal intensity, but where the 8x8 patches around these two voxels contain different distributions of tissue types. In this case, the reviewer thinks that your method would give different normalized values for these two pixels, even though they had essentially the same intensity in the original image. The reviewer is asking you to demonstrate how your method handles this situation.
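To make the concern concrete, here is a minimal sketch of patch-mean normalisation as I understand the described method (not necessarily the paper's exact formulation). Two pixels with the same raw intensity come out different whenever their 8x8 patches have different means, which is exactly the mixed-tissue case the reviewer raises:

import numpy as np

def patch_mean_normalize(img, p=8):
    # Divide each p x p patch by its own mean (illustrative only).
    h, w = img.shape
    out = img.astype(np.float64).copy()
    for i in range(0, h - h % p, p):
        for j in range(0, w - w % p, p):
            m = out[i:i+p, j:j+p].mean()
            if m > 0:
                out[i:i+p, j:j+p] /= m
    return out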
  • asked a question related to Image Processing
Question
6 answers
During pre-processing of medical image data, different techniques should be considered, such as cropping, filtering, masking, and augmentation. My query is: which techniques are most frequently applied to medical image datasets during pre-processing?
Relevant answer
Answer
Image pre-processing techniques can be classified into four kinds, given below.
1. Pixel brightness transformations/corrections
2. Geometric transformations
3. Image filtering and segmentation
4. Fourier transform and image restoration
Kind Regards
Qamar Ul Islam
  • asked a question related to Image Processing
Question
6 answers
Hello
I'm looking to generate synthetic diffusion images from T1-weighted images of the brain. I read that diffusion images are a sequence of T2 images but with gradients, so maybe it could be something related to this; I'm not sure how to generate these gradients either. I'm trying to generate "fake" diffusion images from T1w because of the lack of data from the subjects I'm evaluating.
Can someone please help me?
  • asked a question related to Image Processing
Question
10 answers
Hello,
I have been working on computer vision, using datasets from Kaggle or other sites for my projects. But now I want to do lane-departure warning and real-time lane detection under real-world conditions (illumination, road conditions, traffic, etc.). The idea of using simulators came to mind, but there are lots of simulators online and I'm confused about which one would suit my work!
It would be very supportive if anyone could guide me in picking the best simulator for my work.
  • asked a question related to Image Processing
Question
2 answers
Is it because the imaging equation used by the color constancy model is built on RAW images? Or is it because the diagonal model can only be applied to RAW images? When we train a color constancy model using sRGB images, can we still use certain traditional color constancy models such as gamut mapping, corrected moments, or a CNN?
Relevant answer
Answer
Color constancy is a type of subjective constancy and a property of the human color-perception system that keeps the perceived color of objects largely consistent under changing lighting conditions.
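To connect this to the question: loosely speaking, traditional models such as gray-world can also be applied to sRGB images; the diagonal model just holds better on linear RAW data, since sRGB's gamma and rendering break the linearity it assumes. A minimal gray-world sketch in Python:

import cv2
import numpy as np

# Scale each channel so the channel means become equal (gray-world).
img = cv2.imread("scene.jpg").astype(np.float64)   # hypothetical image
means = img.reshape(-1, 3).mean(axis=0)
img *= means.mean() / means                        # per-channel gains
out = np.clip(img, 0, 255).astype(np.uint8)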
Kind Regards
Qamar Ul Islam
  • asked a question related to Image Processing
Question
8 answers
Could anyone suggest software or code (R or Python) capable of recognizing bumblebees (recognizing only, not identifying) from video recordings?
  • asked a question related to Image Processing
Question
4 answers
Dear sir/madam,
Greetings for the day,
With great privilege and pleasure, I request anyone belonging to the Image Processing domain to review my Ph.D. thesis. I hope you will be kind enough to review my research work. Please get back to me at my email id: [email protected] at your leisure.
Thanking you in advance.
Relevant answer
Answer
(Reversible Data Hiding) belongs to Electrical Science @Nawab Khan
  • asked a question related to Image Processing
Question
19 answers
Hi. I'm working on 1000 images of 256x256 dimensions. For segmentation I'm using SegNet, U-Net, and DeepLabv3 layers. When I trained my algorithms, training took nearly 10 hours. I'm using a laptop with 8 GB RAM and a 256 GB SSD, and MATLAB software for coding. Is there any possibility of speeding up training without a GPU?
  • asked a question related to Image Processing
Question
7 answers
I am researching handwriting analysis using image processing techniques.
Relevant answer
Answer
It depends on the application. In some cases, you may need to do that. Check this paper; it will help you understand this.
  • asked a question related to Image Processing
Question
3 answers
Can anybody recommend a tool that could extract (segment) pores (lineolae) from the following image of a diatom valve?
I mean an ImageJ or Fiji plugin, or any other software that can solve this task.
Relevant answer
Answer
First of all, diatoms look amazing! If you happen to have any pretty pictures you'd be happy to share, I'm looking for a new desktop wallpaper :)
To answer your question: if you are looking for a reasonably easy but robust approach, you could try the Trainable Weka Segmentation plugin (https://imagej.net/plugins/tws/), which uses machine learning to extract relevant parts of your image. You use a set of training images that you annotate yourself to train the classifier, and then you can apply the classifier to a comparable set of images.
Hope this helps
  • asked a question related to Image Processing
Question
4 answers
I'd like to measure the frost thickness on the fins of a HEX based on GoPro frames.
I have the ImageJ software, but I don't know whether there is a way to select a zone (a frosted fin) and deduce the average length in one direction.
Currently I take random measurements on the given fin and average them; however, the random points may not be representative.
I attached two pictures of the fins and frost to illustrate my question.
In advance, thank you very much,
AP
Relevant answer
Answer
Máté Nászai, it works very well, thank you! I had some issues because I did not convert my initial picture to binary properly (when doing "Make Binary" on the 8-bit image, it does the opposite of the wanted result, if I understood correctly).
"Thresholding" the image works well.
Again, thank you for your time.
AP
  • asked a question related to Image Processing
Question
16 answers
Currently, I'm working on a Deep Learning based project. It's a multiclass classification problem. The dataset can be found here: https://data.mendeley.com/datasets/s8x6jn5cvr/1
I have mostly used transfer learning but couldn't reach a higher accuracy on the test set. I have used cross-entropy and focal loss as loss functions. Here, I have 164 samples in the train set, 101 samples in the test set, and 41 samples in the validation set. Yes, about 33% of the samples are in the test partition (the data partition can't be changed, as instructed). I was able to get an accuracy score and F1 score of around 60%. How can I get higher performance on this dataset with this split ratio? Can anyone suggest some papers to follow, or any other suggestions or guidance on my deep learning-based multiclass classification problem?
Relevant answer
Answer
For a smaller dataset, I suggest trying conventional ML techniques; a deeper network requires a large training dataset. Try a shallower network and see if it gives some results. You need to play with the parameters, and ultimately with the data, e.g. increasing it by using augmentation (see the sketch below).
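A minimal torchvision augmentation sketch; the transform choices and magnitudes are placeholders to tune:

import torchvision.transforms as T

train_tf = T.Compose([
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomRotation(15),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
])
# e.g. dataset = torchvision.datasets.ImageFolder("train/", transform=train_tf)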
  • asked a question related to Image Processing
Question
3 answers
I am working on CTU (Coding Tree Unit) partitioning using a CNN for intra-mode HEVC, and I need to prepare a database for it. I have referred to multiple papers; in most of them, images are encoded to obtain binary labels (splitting or non-splitting) for all CU (Coding Unit) sizes, resolutions, and QPs (Quantization Parameters).
If anyone knows how to do this, please give the steps or reference material.
Reference papers
Relevant answer
  • asked a question related to Image Processing
Question
3 answers
Hi,
In my research, I have created a new way of enhancing weak edges. I wanted to try my method on an image dataset to compare it with the active contour philosophy.
So, I was looking for images with masks, as shown in the paper below.
If you can help me to get this data, it would be a great help.
Thanks and Regards,
Gunjan Naik
Relevant answer
Answer
The best way to obtain the dataset is to request it from the authors.
  • asked a question related to Image Processing
Question
6 answers
I'm looking for a PhD position and opportunity at an English-speaking university in a European country (or Australia).
I majored in artificial intelligence. I work in the field of medical image segmentation, and my Master's thesis was about retinal blood-vessel extraction based on active contours. I am skilled in image processing, machine learning, MATLAB, and C++.
Could anybody help me find a professor and a PhD position matching my skills at an English-speaking university?
Relevant answer
Answer
If you have cleared a qualifying examination, such as NET, you can go for a scholarship.
  • asked a question related to Image Processing
Question
3 answers
I am currently collecting a red blood cell dataset for classification into the 9 categories of Ninad Mehendale's research paper. Can anyone suggest a dataset for the paper "Red Blood Cell Classification Using Image Processing and CNN"?
  • asked a question related to Image Processing
Question
6 answers
There are shape descriptors: circularity, convexity, compactness, eccentricity, roundness, aspect ratio, solidity, and elongation.
1) What are the actual formulas for determining these descriptors?
2) Does circularity = roundness? Does solidity = ellipticity?
I compared lectures (M.A. Wirth*) with the ImageJ (Fiji) user guide and am completely confused: the descriptors are almost completely different! Which source should I trust?
*Wirth, M.A. Shape Analysis and Measurement. Lecture 10, Image Processing Group, Computing and Information Science, University of Guelph: Guelph, ON, Canada, 2001, p. 29.
Relevant answer
Answer
What matters most is being able to choose the right descriptor(s) for your task. Some people prefer to throw as many descriptors as they can into a neural net and see what comes out; that is not the best methodology.
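Exactly because the sources disagree, it is safest to compute descriptors yourself from explicit formulas; a minimal scikit-image sketch using the definitions most commonly seen (e.g. circularity = 4*pi*A/P^2):

import numpy as np
from skimage import measure

mask = np.zeros((100, 100), dtype=np.uint8)   # stand-in binary shape
mask[30:70, 20:80] = 1

props = measure.regionprops(measure.label(mask))[0]
area, perimeter = props.area, props.perimeter

circularity = 4 * np.pi * area / perimeter ** 2       # one common definition
aspect_ratio = props.major_axis_length / props.minor_axis_length
solidity = props.solidity                             # area / convex-hull area
eccentricity = props.eccentricity

Whatever convention you pick, report the formulas alongside the results so readers are not left guessing.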
  • asked a question related to Image Processing
Question
9 answers
Dear Researchers,
In remote sensing of volcanic activity, where the objective is to determine temperature, which portion (more specifically, which range) of the EM spectrum can detect the electromagnetic emissions of hot volcanic surfaces (which are a function of the temperature and emissivity of the surface and can reach temperatures as high as 1000°C)? And why?
Sincerely,
Aman Srivastava
Relevant answer
Answer
8-15 μm (the thermal infrared region).
  • asked a question related to Image Processing
Question
5 answers
I have grayscale images obtained from SHG microscopy of human cornea collagen bundles, both as TIFF stack images and in their CZI format. I want to convert those 2D images into a 3D volume, but I could not find a method for doing this using MATLAB, Python, or any other program.
Relevant answer
Answer
If you know the physical dimensions of your images, and the images in the stack are properly aligned (consecutive), you can create a 3D volume in MATLAB and then write that volume as NIfTI (normally used for neuroimaging, but it should do the trick). There are many tools that can work with NIfTI and perform 3D volume rendering etc., such as 3D Slicer. It is just a representation; the important thing is to have the mapping between physical and image coordinates.
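A minimal Python version of that route, assuming the stack is an aligned multi-page TIFF (tifffile + nibabel; the voxel sizes are placeholders to replace with your microscope's pixel size and z-step):

import numpy as np
import tifffile
import nibabel as nib

volume = tifffile.imread("cornea_stack.tif")   # shape: (z, y, x)
volume = np.transpose(volume, (2, 1, 0))       # reorder to (x, y, z)

# Voxel spacing in mm on the affine diagonal (placeholder values).
affine = np.diag([0.001, 0.001, 0.002, 1.0])
nib.save(nib.Nifti1Image(volume.astype(np.float32), affine), "cornea.nii.gz")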
  • asked a question related to Image Processing
Question
4 answers
Hello dear researchers.
It seems that the SiamRPN algorithm is one of the very good algorithms for object tracking; its processing speed on a GPU is 150 fps. But the problem is that if your chosen object is, for example, a white phone, and you are dressed in white and move the phone towards you, the whole bounding box will mistakenly land on your clothes: low sensitivity to color. How do you think I can optimize the algorithm to solve this problem? Of course, there are algorithms with high accuracy, such as SiamMask, but they have a very low fps. Thank you for your help.
Relevant answer
Answer
Thanks for your valuable answer
  • asked a question related to Image Processing
Question
4 answers
Hi
I'm trying to acquire raw data from a Philips MRI scanner.
I followed the save-raw-data procedure and obtained a .idx and a .log file.
I'm not sure if I carried out the procedure correctly.
Are .idx and .log the file formats of Philips MRI raw data?
If so, how do I open these files? Is it possible to open them in MATLAB?
Thanks
Judith
Relevant answer
Hi, medical images are in DICOM format, but you can manipulate the raw data using a viewer such as OsiriX or Horos (Apple). It still depends on what you want to look at; certain Philips workstations, like ISP, can process the raw data you want to extract.
  • asked a question related to Image Processing
Question
5 answers
How can I determine distance and proximity, as well as depth, in image processing for object tracking? One idea that came to mind was to detect whether the object is moving away or approaching based on its size in the image, but I do not know whether there is an algorithm I can base this on.
In fact, how can I obtain the x, y, z coordinates from the image taken by the webcam?
Thank you for your help
  • asked a question related to Image Processing
Question
59 answers
Hi,
What are the main image-processing journals that publish work on the collection, creation, and classification of medical imaging databases, such as the Medical Image Analysis journal?
Thank you for your support,
Relevant answer
Answer
You can check this list for more information about journals.
  • asked a question related to Image Processing
Question
5 answers
I am using transfer learning with pre-trained models in PyTorch for an image classification task.
When I modified the output layer of the pre-trained model (e.g., AlexNet) for our dataset and ran the code to inspect the modified architecture, it printed "None".
Relevant answer
I tried to replicate your code, and I don't get "None"; I just get an error when I try to run inference with the model (see image 1). In your forward you do:
def forward(self, xb):
    xb = self.features(xb)
    xb = self.avgpool(xb)
    xb = torch.flatten(xb, 1)
    xb = self.classifier(xb)
    return xb
but features, avgpool, and classifier are attributes of self.network, so you need to do:
def forward(self, xb):
    xb = self.network.features(xb)
    xb = self.network.avgpool(xb)
    xb = torch.flatten(xb, 1)
    xb = self.network.classifier(xb)
    return xb
Then, when I run the forward pass again, everything looks OK (see image 2).
If this does not work for you, could you share your .py file? I need to check the functions to_device and evaluate, and the ImageClassificationBase class, to replicate the error and identify where it is.
  • asked a question related to Image Processing
Question
4 answers
Hi everyone, I'm currently converting video into images, and I noticed that 85% of the images don't contain the object. Is there an algorithm to check whether an image contains an object, using an objectness score?
Thanks in advance :)
Relevant answer
Answer
If it is a video and you want to detect objects coming into the field of view, you could simply use foreground detection; see https://au.mathworks.com/help/vision/ref/vision.foregrounddetector-system-object.html.
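An OpenCV equivalent, if you are working in Python rather than MATLAB; a minimal sketch where the video path and foreground threshold are placeholders:

import cv2

cap = cv2.VideoCapture("input.mp4")            # hypothetical video
subtractor = cv2.createBackgroundSubtractorMOG2()

kept = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    if (mask > 0).mean() > 0.01:               # enough foreground -> keep frame
        kept += 1
cap.release()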