Reasoning - Science topic

Explore the latest questions and answers in Reasoning, and find Reasoning experts.
Questions related to Reasoning
  • asked a question related to Reasoning
Question
1 answer
Agroforestry is recognized as a good land management practice, and it provides many tangible and intangible benefits to ecosystems. Yet most farmers prefer monoculture or mixed cropping systems, and agroforestry remains unpopular. Any reasons?
Relevant answer
Answer
Agroforestry, acknowledged for its manifold benefits to ecosystems, is less favored among farmers compared to monoculture or mixed cropping systems due to various reasons. Primarily, economic incentives favor the latter, offering immediate and predictable returns over the longer-term investment required for agroforestry. Additionally, lack of awareness, insecure land tenure, limited market access and insufficient technical support contribute to its lower adoption rates. Farmers may perceive agroforestry as riskier due to uncertainties in tree establishment, pest management and potential conflicts with existing land uses. Despite its potential, overcoming these barriers and promoting the benefits of agroforestry are essential for its wider acceptance and implementation in agricultural landscapes.
  • asked a question related to Reasoning
Question
5 answers
The surface of the target may have gathered some coating at places during deposition. These spots seem to be little less conducting compared to the other fresh parts of the target. Will it be safe for the target if we try to clear this off via polishing with sand paper? If yes, then which grade of sand paper should be reasonable to use? Or are there some other better ways to clean in such cases?
Relevant answer
Answer
Thanks a lot to you both. The discussion has been really helpful for me.
  • asked a question related to Reasoning
Question
1 answer
Please ask if you need additional information.
Relevant answer
Answer
It would be useful to know what concentration of DAPI you used and for how long, in order to troubleshoot.
Additionally, there are some very bright spots present, and I'm uncertain whether they are cells or debris. If your exposure is set to automatic, this could be causing your microscope to adjust gain or exposure time to these very bright spots, leaving your nuclei underexposed. It could also be that contrast is set to automatic in the software in which you are viewing/exporting the images, and again these bright spots are interfering. If the second is true, then you would only need to adjust your contrast before exporting, overexposing those dots but correctly exposing your cells.
Hope this helps!
  • asked a question related to Reasoning
Question
1 answer
I have been trying to isolate RNA from magur fish testis. At the end of the isolation, the RNA pellet becomes gel-like when DEPC-treated water is added. What could be the possible reason for that, and what is the solution?
Relevant answer
Answer
Gel-like RNA pellets from testis samples are most likely caused by impurities such as salts, proteins, or other organic components co-precipitating with the RNA during isolation. These impurities can prevent the RNA pellet from dissolving in water or RNA resuspension buffer, producing a gel-like material rather than a clear RNA solution.
  • asked a question related to Reasoning
Question
1 answer
Hello. I am Bakhtier Rasulov. For many years I have been a member of the scientific and social network ResearchGate. My scientific publications, about 50 articles, were presented there. Today I discovered that none of my work is on the list. I don't know the reason for this. I kindly ask for your help in understanding this issue. How can I get my articles back?
Relevant answer
Answer
This is very strange; usually research items do not disappear from profiles. Some weeks ago, a user asked a similar question, and in that case he had two profiles. However, I think that the two profiles with very similar names (https://www.researchgate.net/profile/Bakhtiyor-Rasulov and https://www.researchgate.net/profile/Bakhtiyor-Rasulev) are not yours.
I am sure that your publications were not completely removed from RG, but are still there, just no longer connected to your profile. Here are three methods to find them and to claim authorship (see also "How do I confirm authorship of my publications to add them to my profile?" in https://help.researchgate.net/hc/en-us/articles/14293005132305-How-to-add-research):
1. Open https://www.researchgate.net/profile/Bahtijor-Rasulov/unconfirmed (visible only to you). There may be a list of publications for which you may claim authorship, provided these are yours.
2. Search for your name or for the titles of your missing papers at Google, adding site:researchgate.net, e.g.: https://www.google.com/search?q="Bahtijor+Rasulov"+site%3Aresearchgate.net
3. Search for more titles of your missing papers at https://www.researchgate.net/search.Search.html?query=&type=publication.
  • asked a question related to Reasoning
Question
2 answers
Since I received the new orders of this nichrome wire, sold by Phymep but fabricated by A-M Systems, it has been totally impossible for me (and for others) to build the twisted electrodes as usual without them breaking all the time... Later on, I learned that the reason was a change in their fabrication, in particular the coating. Thus, I cannot use this new wire at all and need the same wire as before to build my electrodes. Would you know another company fabricating it?
Relevant answer
Answer
Hey! Have the same issue, is there any progress? Has anyone tried wires from https://www.alleima.com/en/ or https://calfinewire.com/?
  • asked a question related to Reasoning
Question
2 answers
Kindly share the reason behind this error: why are only two S-parameters appearing?
Relevant answer
Answer
I would expect that an antenna can only provide S11 and S21, as you cannot excite the output side of an antenna: it is the atmosphere or space. In the real world this is not possible unless you use a remote antenna.
S11 will give the reflection coefficient of the antenna and S21 the gain which can only be theoretical as you cannot connect to space.
An alternative would be to excite two antennas connected to each other in your simulation and then you would have two 50 ohm ports to excite and the outputs of each antenna would remove the need to connect to space. You would have to correct for the fact that there are two antennas in S22 and S12.
You have not said what type of simulation or what frequency you are using so it is unclear what approach you are using.
Regards
Greg
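To make the S11 relation mentioned above concrete, here is a minimal Python sketch of the reflection coefficient seen at an antenna port. The helper names, the 75-ohm example impedance, and the 50-ohm reference are illustrative assumptions, not values from this thread:

```python
import math

def s11_from_impedance(z_ant: complex, z0: float = 50.0) -> complex:
    """Reflection coefficient at the port: Gamma = (Z - Z0) / (Z + Z0)."""
    return (z_ant - z0) / (z_ant + z0)

def return_loss_db(gamma: complex) -> float:
    """|S11| in dB; a well-matched antenna gives a strongly negative value."""
    return 20.0 * math.log10(abs(gamma))

# Hypothetical example: a 75-ohm antenna fed from a 50-ohm port
gamma = s11_from_impedance(75.0)
print(round(abs(gamma), 3))             # 0.2
print(round(return_loss_db(gamma), 2))  # -13.98
```

A perfectly matched antenna (Z = Z0) gives Gamma = 0, i.e. S11 of minus infinity dB, which is why S11 is the natural single-port quantity for an antenna simulation.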
  • asked a question related to Reasoning
Question
4 answers
Hello,
When we pattern a mask on a wafer, after exposure we notice that the linewidth of our patterns in the center of the wafer is thicker in comparison with the edges.
What is the reason, and what is the solution?
Thanks for your help.
Relevant answer
Answer
c) Ensure that during baking the wafer is placed flat on the hotplate. Sometimes when the wafer is not flat, the resist can flow towards the edges.
  • asked a question related to Reasoning
Question
1 answer
Thinking of journals as “low tier” or “high tier” is distracting from what you actually need to do, which is good work. Do your work as best you can, and publish in the fanciest journal that will accept your papers. Trying to over-optimize will lead to a lot of wasted time and lost productivity, which might end up hurting your chances for future jobs. And you also don’t want to give the impression that you’re more interested in the journal you publish in than in the work you do.
The main difference between a “high tier” journal and a “lower tier” one is that the high tier selects papers that have more surprising results, and are more likely to be wrong. You read that right — results in “fancy” journals such as Nature or Science are pretty often wrong, which makes sense given that they are “groundbreaking”. When doing groundbreaking work, it’s likely you’ll make a lot of mistakes; that’s the price you pay for going too far off the beaten path. Results that make it to more average journals are more mundane, but for that reason it’s also easier to make them more rigorous, and thus it’s less likely they’re wrong.
Remember also that no journal has dedicated reviewers. The people reviewing your work will be essentially the same, whether in Nature (impact factor >30) or in PLOS (impact factor about 3). So a fancy journal selects for how amazing your work sounds more than for how good it is.
That said, of course you want to stay away from the creepy “journals” that invite you to publish your work in them without getting your name right or having any idea what your work is about. It’s probably better to not publish at all than publish there. In some fields — like theoretical physics and most of math — people only really care about the results being available on a preprint server like the arXiv and don’t care where the work ends up being published. This is starting to become more popular in some areas of biology, too, with the bioRxiv (preprint server for Biology).
Another common sense thing to do is to first try publishing your work in a journal that’s just a bit fancier than you think your results are. If they reject your paper, go down the ladder. So if you’ve found a nice, new result that seems somewhat surprising, it’s worth considering submitting it to Nature or Science (or whatever the fancy journal of choice is in your field), just in case they like it. But if your work is a rather obvious extension of existing work, you might be better off heading for a more specialized journal that puts less emphasis on groundbreaking results.
Relevant answer
Answer
There are several factors to consider when deciding whether to publish in a low-tier journal as a PhD student:
Pros:
  • Get Published Faster: Lower-tier journals often have faster review times and publication cycles. This can be beneficial for PhD students who need publications to fulfill degree requirements or boost their CV for job applications.
  • Less Competitive: Low-tier journals may be less competitive than top-tier publications, making it easier for a PhD student to get their work accepted. This can be particularly encouraging for new researchers.
  • Gain Publishing Experience: The process of writing, submitting, and revising a manuscript is valuable for any researcher. Publishing in any journal can help you develop these skills.
Cons:
  • Limited Visibility and Impact: Low-tier journals typically have lower readership and citation rates compared to top-tier journals. This means your research may reach a smaller audience and have less influence on the field.
  • Potential Damage to Reputation: Publications in some low-tier journals might reflect poorly on your academic credibility. Carefully research the reputation of the journal before submitting.
  • Detracts from Job Prospects: While some publications are better than none, some employers, particularly in prestigious academic institutions, might prioritize candidates with publications in high-impact journals.
Alternative Strategies
  • Aim High, But Have a Backup Plan: Start by targeting high-impact journals in your field. Be prepared for potential rejections and have lower-tier options in mind if needed.
  • Conferences and Presentations: Presenting your research at conferences can be a valuable way to disseminate your findings and gain recognition, while also building your network.
  • Focus on Quality Research: Ultimately, the quality of your research is most important. Strong research, even if published in a lower-tier journal, will attract attention and citations.
Making the Decision:
The decision to publish in a low-tier journal should be made on a case-by-case basis. Consider the following factors:
  • Quality of the Journal: Research the journal's reputation, editorial board, and past publications.
  • Importance of Publication: How important is it for you to get published quickly?
  • Strength of Your Research: Is your research strong enough to be competitive in a top-tier journal?
General Advice:
  • Discuss with your advisor: Seek guidance from your PhD advisor regarding publication strategies and appropriate journals for your field.
  • Prioritize Quality: Always aim for reputable journals that uphold rigorous peer-review processes.
  • Build a Strong Publication Portfolio: Ideally, your PhD research should culminate in publications in a variety of journals, high-tier and potentially lower-tier depending on the specific project.
  • asked a question related to Reasoning
Question
2 answers
I have done impedance analysis of a pellet with silver-pasted electrodes on a Hioki 3536 LCR meter, but my data is messy and zigzag in the Nyquist plot. I made the pellet again, and the data now shows the semicircular trend. Please explain the possible reasons for this: is my silver pasting wrong, is there something in the pellet formation, or are there other possible causes?
Relevant answer
Hi Naveen
It seems you might have had one of the following issues with your electrode:
1. Poor electrical connection of your paste to the surface of the current collector. In other words, you do not have a good connection in your system.
2. Sometimes this issue occurs because of weak adhesion of the paste, both to the current collector and within its own structure. Particles separate from the electrode surface, which disturbs the equilibrium at the electrode surface, so you may see fluctuations in the data.
3. Sometimes it happens because of external connections to the cell, such as clips and wires. You should check all connections before starting any EIS measurement; they should be clean, without any oxide or sulfide on the wires and clips. As you might know, you are measuring variations of current in the range of nA, µA, or at most mA, so it is very important to have perfect connections throughout the cell.
cheers
  • asked a question related to Reasoning
Question
1 answer
I am thrilled to announce that we have launched a new journal in the fields of TESOL and Applied Linguistics entitled "TESOL Today", published by EngiScience Publisher. I act as the Editor-in-Chief of the journal and very much look forward to receiving your original contributions. TESOL Today applies no processing or publication charges. For more information about the journal, please click on the link provided. https://engiscience.com/index.php/tesol
The first of its kind across Iraq and the Kurdistan Region, this is an exciting time for me and for our fellow researchers, as launching a journal dedicated to TESOL and Applied Linguistics in the region has long been my dream. The journal answers a pressing need of the TESOL and Applied Linguistics research community in the region. TESOL Today is intended for researchers in Kurdistan, Iraq, and the rest of the world as well.
The founders of TESOL Today are dedicated to establishing a high-quality international journal in the field. Our overarching goal is to become a reputable international journal that is highly respected among the TESOL and Applied Linguistics community across the globe. My colleagues and I hope to achieve our goals within a reasonably short period of time, as I am fully supported by an excellent group of people, which convinces me that we will become a desirable journal in the field of English language in education. I am truly excited to have this opportunity to lead the journal as Editor-in-Chief. Once again, I look forward to receiving original research articles from both local and international researchers in the field.
A bi-annual journal by EngiScience Publisher, TESOL Today invites research contributions across the broad spectrum of Teaching English to Speakers of Other Languages (TESOL) and Applied Linguistics. As an international, double-blind peer-reviewed, open-access platform, we are committed to showcasing cutting-edge studies. Our open access policy ensures that your research gains global visibility, becoming instantly accessible to a worldwide audience.
Editor-in-Chief,
Karwan Mustafa Saeed, PhD
Relevant answer
Answer
Dear Karwan,
TESOL = ? Nevertheless, I have forwarded your info to colleagues in Kazakhstan who are philologists in Astana and Almaty, respectively. I have read there, at the Eurasian Gumilyov University and the Al Fahrabi University, my program 'Science - Language - Society'. Its 2 pillars are
1) "A language that creates and thinks for you" (Friedrich Schiller),
2) '1984' and 'LTI' (En: Language of the Third Reich).
Of course, it reaches to modern developments like 'to google'.
Moreover, I could publish about Huygens' principle for linguistics.
Looking forward,
Peter
  • asked a question related to Reasoning
Question
12 answers
Many times in physics, the equations accurately model certain aspects of reality, but the explanation is not compatible with philosophical truths. For example, in GTR, time is often believed to have begun with the universe, and this violates reason: how did the universe begin without time? In the link shown below, I set up a framework in which logic is never violated, and the conclusions are compatible with STR, GTR, and QM. I would value your opinion on them. Thanks.
Relevant answer
Answer
Recent SS posts in
- and in https://www.researchgate.net/post/What-criteria-do-you-have-in-mind-while-characterizing-a-scientific-theory/30 , pages 29, 30, 31, which it is always necessary to have in mind when considering any scientific texts (or non-scientific ones, if they do not accord with the criteria),
- are relevant to this thread's question.
Cheers
  • asked a question related to Reasoning
Question
1 answer
I need the reason why stored food crops become toxic to living organisms when eaten.
Relevant answer
Answer
ChatGPT summarizes it as follows:
When substances remain in storage for a long time, they can become toxic to living organisms for several reasons:
Chemical Reactions: Substances can react with the environment or other substances in the storage area to form toxic by-products.
Contamination: Contaminants can enter the storage area and mix with the substance, making it toxic.
Decay: Some substances can break down over time and produce toxic components.
Exposure to light or heat: Exposure to light or heat can speed up chemical reactions that can produce toxins.
Microbial activity: Microorganisms can develop in stored substances and produce toxic metabolites.
It is therefore important to monitor storage conditions and check substances regularly to prevent the formation of toxic substances.
  • asked a question related to Reasoning
Question
2 answers
Summary:
The gender difference is determined in their last rings. Communication takes place through the North Pole of both.
Hints:
1- At the moment of connection, the north poles of both stars are on the same line.
2- In each of the stars, the two internal rotations must be opposite each other.
Example of reversed poles: like the planet Venus where the poles have changed.
3- Sperms are sent from the male North Pole to the female North Pole.
4- Any sperm that is thrown into the orbit around the female star will not be fertilized. But sperms that are in polar rotation are fertilized.
5- Mother star: rotating orbits of the north pole of the first male and the second female. And the rings are repeated between male and female.
According to the attraction of the male sperm in each circle, there will be a girl or a boy from the female star.
6- The rotation of sperm occurs only in the North Pole and around the vertical axis of the poles.
7- By increasing the mass of the sperm and speeding up the movement, the sperm is thrown out.
8- The born child has all the nuclear and rotational characteristics of the parents.
I explained the reason for the relationship and the reason for enjoying the relationship in an article.
I discovered a new nuclear model. This model is the same for atoms and stars. After presenting the nuclear model, I will give a full description of the marriage of the stars, with calculations and mathematical formulas in full detail.
Life, freedom, happiness.
Relevant answer
Answer
Hints:
1- In space: any mass that has a nucleus reverses its internal rotation in a period of time.
Example: In another period of time, the rotation of the planet Earth will be from east to west. At the same time, the planet Venus will rotate from west to east. Proof with mathematical calculations.
2- Rotating orbits around the north-south axis, in the female star: it is only for sperm particles.
The rotation of all objects is around the center of the star. According to the second figure.
3-
Why does the sperm particle not revolve around the center of the star?
What is the reason for the formation of the polar orbit and how does the sperm particle rotate around the North Pole orbit?
I will provide the answers to these questions with complete calculations in an article.
Unfortunately, I cannot give more detailed explanations and only wrote a summary. First, I need to present and register the new nuclear model I discovered.
  • asked a question related to Reasoning
Question
2 answers
Hello Everyone,
I conducted a study using a pre-post one-group (pre-experimental) design and stated only a directional (alternative) hypothesis. I received the following comment from a reviewer:
"Why did you prefer the alternative hypothesis over the null hypothesis?"
I chose this unidirectional hypothesis because my literature review showed positive results in previous studies.
One statistician says it is okay. However, one expert says, "Even if the review suggests a positive relation, we should work on the null hypothesis."
I have also read a few studies with only H1, for the same reason stated above.
Could there be any other explanations?
Please help me respond to it with references.
Regards,
Saleem
Relevant answer
Answer
The classic null hypothesis is no relationship, where you want to determine whether your results allow you to reject the null. Typically, you only propose a directional hypothesis as the null when you actually expect to find something different (i.e., no relationship or a relationship in the opposite direction).
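To make the directional-test idea concrete, here is a minimal stdlib Python sketch of a paired pre-post t statistic compared against a one-sided critical value. The data, the function name, and the df = 4 critical value (taken from a standard t-table) are invented for illustration only:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(pre, post):
    """t statistic for the paired differences (post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical pre-post scores for a one-group design
pre  = [10, 12, 11, 13, 9]
post = [13, 14, 12, 16, 11]
t = paired_t_statistic(pre, post)

# One-sided (directional) test at alpha = 0.05 with df = n - 1 = 4:
# the t-table critical value is about 2.132.
T_CRIT = 2.132
print(round(t, 2), t > T_CRIT)  # 5.88 True
```

The null of "no change" is still what the test evaluates; the directional H1 only determines that the rejection region lies in one tail (critical value 2.132 instead of the two-sided 2.776 at df = 4).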
  • asked a question related to Reasoning
Question
1 answer
Upon observing anyone’s innermost life, would character flaws always be found? How? Why? My answer: yes, because, among other reasons, humans are bound to be the least ethical of creatures.
Sources:
Relevant answer
Answer
There is a chance that a character flaw would be found.
  • asked a question related to Reasoning
Question
3 answers
Hello, I wonder about calculating the extinction coefficient of colorful solutions.
I measured the absorbance of some solutions with a UV-vis spectrophotometer (Left).
Using two relations, I calculated the absorption coefficient (α) and extinction coefficient (k).
The final k values of solutions I calculated are about 10^-4 ~10^-5.
Are the k values reasonable considering the solution colors in the optical images?
I'm not sure whether the k values should be higher.
Please give me some advice if you have any ideas.
Thank you.
Relevant answer
Answer
Note that the absorption coefficient and the extinction coefficient are closely related. Following Burns (1993), alpha = log((I0)/I)/d, where I0 and I are the intensities of the incident and emergent light, respectively, and d is the thickness, in cm, of the material, i.e. the optical path length in the medium. Both the absorption coefficient and the extinction coefficient are functions of wavelength (or wavenumber). Perhaps you mean the molar extinction coefficient?
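As a numerical sketch of these relations, here is a small Python example, assuming the common conversion k = alpha * lambda / (4 * pi) for the dimensionless extinction coefficient and a natural-log (Napierian) alpha = ln(10) * A / d; note that Burns' formula above uses the base-10 logarithm directly, so the two conventions differ by a factor of ln 10. The example values (A = 1.0, d = 1 cm, 500 nm) are mine, not from the question:

```python
import math

def absorption_coefficient(absorbance: float, path_cm: float) -> float:
    """Napierian alpha = ln(10) * A / d, in cm^-1, from base-10 absorbance A."""
    return math.log(10) * absorbance / path_cm

def extinction_coefficient(alpha_per_cm: float, wavelength_nm: float) -> float:
    """Dimensionless k (imaginary part of refractive index): k = alpha * lambda / (4 pi)."""
    wavelength_cm = wavelength_nm * 1e-7
    return alpha_per_cm * wavelength_cm / (4 * math.pi)

# Hypothetical example: A = 1.0 over a 1 cm cuvette, at 500 nm
alpha = absorption_coefficient(1.0, 1.0)   # ~2.303 cm^-1
k = extinction_coefficient(alpha, 500.0)   # ~9.2e-6
print(f"{alpha:.3f} {k:.2e}")
```

On these assumptions, a strongly colored solution with A near 1 over 1 cm still gives k of order 1e-5, so values of 1e-4 to 1e-5 are not obviously unreasonable for visibly colored solutions.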
  • asked a question related to Reasoning
Question
4 answers
Hi! There was some kind of glitch on my ResearchGate page: the citation counts of my 104 articles, books, and other publications suddenly turned out to be zero (see screenshot). What is the reason for this surprise?
Regards, Dr. Mikhail Pletnev
Relevant answer
Answer
Actually, I've never been into self-citation...
  • asked a question related to Reasoning
Question
2 answers
Today morning my citation records were there, but now all citation records disappear. What is the reason?
Relevant answer
Answer
It would help if RG could update its members on this issue.
  • asked a question related to Reasoning
Question
2 answers
Dear all,
For my last two experiments, my supposedly endothelial cells (differentiated from bone marrow-derived mesenchymal stem cells, at passage ~35) have detached from Transwell inserts 1-2 days following seeding, looking as if I trypsinized them, and creating some cell clumps.
I expand them (for 2 days) and differentiate them (for 3 days) in 48-well plates. Then I expose them to endothelial medium for one day. On the second day of endothelial medium, I transfer them to Transwell inserts that have been coated with fibronectin and collagen type I. When I check 3-4 hours after seeding, I observe that they attach nicely. However, either the next day or the day after, they detach from the Transwells (Corning 3470), and I can't find the reason why.
In both experiments, I changed the media of the Transwells the day after seeding. I inspected the cells both before and after the medium change. In one experiment, the cells detached right after the medium change, although I aspirated the old medium very slowly (at the minimum speed of the vacuum suction and without touching the membrane). In the other experiment, the cells were (mostly) fine after the medium change, but the next day (two days after seeding onto the Transwells) they had detached.
The possibilities I could rule out are:
- There should be no problem with the medium contents/temperature/CO2 concentration/coating because I'm seeding the same cells to coated 48-well plates as well and applying the same conditions on them; and they stay healthy & alive.
- There is no contamination in the plates.
- It's not because they are over-crowded, I'm trying to form a monolayer indeed but they are sparsely distributed and thus they shouldn't be dying from over-confluency.
- I believe it is not about the force my medium change exerts on the cells either, because in one of the experiments cells looked fine after the medium change.
What do you think the reason could be?
Thanks in advance!
Relevant answer
Answer
Hi Mohammad,
Thanks a lot for sharing your experience!
I switched to iPSCs after trying with MSCs for a long time and repeatedly failing. I usually did not have a problem with the iPSC-derived endothelial cells (iECs) on Transwells, but I was doing a half-medium change each day. They were also fine when I had to do a whole-medium change for a permeability assay, but I change the media slowly and never use vacuum suction on Transwells. Although this may explain why the MSCs were detaching right after the media change, I'm still clueless about why they would detach later despite looking fine immediately after a medium change.
We are culturing iECs as well as HUVECs on PDMS microfluidic chips, besides Transwells. For iECs, we coat the PDMS surfaces with 100 ug/mL fibronectin and 50 ug/mL collagen type I (this is the same recipe we use for Transwells), following a 1-hour surface activation with UV light (for HUVECs, the collagen concentration is 100 ug/mL). I believe your fibronectin concentration should suffice. From what we have observed, endothelial cells are indeed sensitive to medium changes, so we do them as slowly as possible. Despite this, yesterday evening for instance, most of my iECs had detached from the microfluidic chip to which they were nicely attached yesterday morning (and they were under a constant flow of 4 uL/h, so no extra force from micropipettes was exerted during a medium change). There must be reasons for the detachment problem that we are not yet aware of, so I'm looking forward to hearing from others too!
Best regards,
  • asked a question related to Reasoning
Question
2 answers
Aside from defining the disulfide bond, what other reasons could account for the significant difference in RMSD of the docked structure and co-crystal?
Relevant answer
Answer
Ayaz Anwar Thank you for your contribution. Precise and great!
All noted.
Thanks.
  • asked a question related to Reasoning
Question
5 answers
There are two graphs [a] is a multi-wall carbon nanotube and [b] is a carboxylated multi-wall carbon nanotube (functionalized). I think there is a problem with the result, especially with a graph and I don't know the reason. I appreciate it if you could help me.
Relevant answer
Answer
As stated above, the spectra are very weak, and the ripples on both spectra could be due to interference bands from internal reflection in the tubes. As stated, there is no sign of the usual carboxylate bands in the 1700 cm-1 region. However, the strong band at about 1550 cm-1 in spectrum (a) is similar to some carboxylic acid salts. This is reduced in (b), with strong OH bands at about 3400 and 1000 cm-1. The baseline looks very flat and the OH bands appear out of proportion. Has a background correction "flat" been applied, which could account for the strange band shapes? The bands at about 2300 cm-1 are due to an imbalance in the CO2 band between the background and sample scans.
  • asked a question related to Reasoning
Question
1 answer
Can we stop global climate change? Does human scientific power extend to the world's climate change? How do researchers respond?
As you know, humans are very intelligent and can predict the future climate of the world with hydrology, climatology, and paleontology. But do the countries that produce the most harmful gases in the earth's atmosphere, especially industrialized countries, think about the future of the earth's atmosphere? Do they listen to the research of climatologists? What would have to happen to force them to listen to climate scientists?
Miloud Chakit added a reply
Climate change is an important and complex global challenge, and scientific theories about it are based on extensive research and evidence. The future path of the world depends on various factors including human actions, political decisions and international cooperation.
Efforts to mitigate and adapt to climate change continue. While complete reversal may be challenging, important steps can be taken to slow progression and lessen its effects. This requires global cooperation, sustainable practices and the development and implementation of clean energy technologies.
Human scientific abilities play an important role, but dealing with climate change also requires social, economic and political changes. The goal is to limit global warming and its associated impacts, and collective action at the local, national, and international levels is essential for a more sustainable future.
Osama Bahnas added a reply
It is impossible to stop global climate change. Human scientific power cannot reach the world's climate change.
Borys Kapochkin added a reply
Mathematical models of increasing planetary temperature as a function of the argument - anthropogenic influence - are erroneous.
Alastair Bain McDonald added a reply
We could stop climate change, but we won't! We have the scientific knowledge but not the political will. One could blame Russia and China for refusing to cooperate, but half the population of the USA (Republicans) deny climate change is a problem and prefer their profligate lifestyles.
All climate change has been attributed to the CO2 responsible for the greenhouse effect. Therefore, there should be scientific experiments by several independent scientific institutes worldwide to find out what the greenhouse impact is at various CO2 concentrations. Then there should be a conference organized by a reliable, professional organization, with the participation of all independent scientific institutions, to establish standards on CO2 concentrations and propose political actions accordingly.
The second action that can be taken is to plant as many trees and plants as possible to take up the CO2 and release oxygen. Stop any deforestation and plant trees immediately in any burnt areas.
Effect of Injecting Hydrogen Peroxide into Heavy Clay Loam Soil on Plant Water Status, NET CO2 Assimilation, Biomass, and Vascular Anatomy of Avocado Trees
In Chile, avocado (Persea americana Mill.) orchards are often located in poorly drained, low-oxygen soils, a situation which limits fruit production and quality. The objective of this study was to evaluate the effect of injecting soil with hydrogen peroxide (H2O2) as a source of molecular oxygen, on plant water status, net CO2 assimilation, biomass and anatomy of avocado trees set in clay loam soil with water content maintained at field capacity. Three-year-old ‘Hass’ avocado trees were planted outdoors in containers filled with heavy loam clay soil with moisture content sustained at field capacity. Plants were divided into two treatments, (a) H2O2 injected into the soil through subsurface drip irrigation and (b) soil with no H2O2 added (control). Stem and root vascular anatomical characteristics were determined for plants in each treatment in addition to physical soil characteristics, net CO2 assimilation (A), transpiration (T), stomatal conductance (gs), stem water potential (SWP), shoot and root biomass, and water use efficiency (plant biomass per water applied [WUEb]). Injecting H2O2 into the soil significantly increased the biomass of the aerial portions of the plant and WUEb, but had no significant effect on measured A, T, gs, or SWP. Xylem vessel diameter and xylem/phloem ratio tended to be greater for trees in soil injected with H2O2 than for controls. The increased biomass of the aerial portions of plants in treated soil indicates that injecting H2O2 into heavy loam clay soils may be a useful management tool in poorly aerated soil.
Shade trees reduce building energy use and CO2 emissions from power plants
Urban shade trees offer significant benefits in reducing building air-conditioning demand and improving urban air quality by reducing smog. The savings associated with these benefits vary by climate region and can be up to $200 per tree. The cost of planting trees and maintaining them can vary from $10 to $500 per tree. Tree-planting programs can be designed to have lower costs so that they offer potential savings to communities that plant trees. Our calculations suggest that urban trees play a major role in sequestering CO2 and thereby delay global warming. We estimate that a tree planted in Los Angeles avoids the combustion of 18 kg of carbon annually, even though it sequesters only 4.5-11 kg (as it would if growing in a forest). In this sense, one shade tree in Los Angeles is equivalent to three to five forest trees. In a recent analysis for Baton Rouge, Sacramento, and Salt Lake City, we estimated that planting an average of four shade trees per house (each with a top view cross section of 50 m2) would lead to an annual reduction in carbon emissions from power plants of 16,000, 41,000, and 9000 t, respectively (the per-tree reduction in carbon emissions is about 10-11 kg per year). These reductions only account for the direct reduction in the net cooling- and heating-energy use of buildings. Once the impact of the community cooling is included, these savings are increased by at least 25%.
Can Moisture-Indicating Understory Plants Be Used to Predict Survivorship of Large Lodgepole Pine Trees During Severe Outbreaks of Mountain Pine Beetle?
Why do some mature lodgepole pines survive mountain pine beetle outbreaks while most are killed? Here we test the hypothesis that mature trees growing in sites with vascular plant indicators of high relative soil moisture are more likely to survive mountain pine beetle outbreaks than mature trees associated with indicators of lower relative soil moisture. Working in the Clearwater Valley of south central British Columbia, we inventoried understory plants growing near large-diameter and small-diameter survivors and nonsurvivors of a mountain pine beetle outbreak in the mid-2000s. When key understory species were ranked according to their accepted soil moisture indicator value, a significant positive correlation was found between survivorship in large-diameter pine and inferred relative high soil moisture status—a finding consistent with the well-documented importance of soil moisture in the mobilization of defense compounds in lodgepole pine. We suggest that indicators of soil moisture may be useful in predicting the survival of large pine trees in future pine beetle outbreaks. Study Implications: A recent outbreak of the mountain pine beetle resulted in unprecedented levels of lodgepole pine mortality across southern inland British Columbia. Here, we use moisture-dependent understory plants to show that large lodgepole pine trees growing in sites with high relative moisture are more likely than similar trees in drier sites to survive severe outbreaks of mountain pine beetle—a finding that may be related to a superior ability to mobilize chemical defense compounds compared with drought-stressed trees.
Can Functional Traits Explain Plant Coexistence? A Case Study with Tropical Lianas and Trees
Organisms are adapted to their environment through a suite of anatomical, morphological, and physiological traits. These functional traits are commonly thought to determine an organism’s tolerance to environmental conditions. However, the differences in functional traits among co-occurring species, and whether trait differences mediate competition and coexistence is still poorly understood. Here we review studies comparing functional traits in two co-occurring tropical woody plant guilds, lianas and trees, to understand whether competing plant guilds differ in functional traits and how these differences may help to explain tropical woody plant coexistence. We examined 36 separate studies that compared a total of 140 different functional traits of co-occurring lianas and trees. We conducted a meta-analysis for ten of these functional traits, those that were present in at least five studies. We found that the mean trait value between lianas and trees differed significantly in four of the ten functional traits. Lianas differed from trees mainly in functional traits related to a faster resource acquisition life history strategy. However, the lack of difference in the remaining six functional traits indicates that lianas are not restricted to the fast end of the plant life–history continuum. Differences in functional traits between lianas and trees suggest these plant guilds may coexist in tropical forests by specializing in different life–history strategies, but there is still a significant overlap in the life–history strategies between these two competing guilds.
The use of operator action event trees to improve plant-specific emergency operating procedures
Even with plant standardization and generic emergency procedure guidelines (EPGs), there are sufficient dissimilarities among nuclear power plants that implementation of the guidelines at each plant must be performed in a manner that ensures consideration of plant-specific design features and operating characteristics. The use of operator action event trees (OAETs) results in identification of key features unique to each plant and yields insights into accident prevention and mitigation that can be factored into plant-specific emergency procedures. Operator action event trees were developed as a logical extension of the event trees developed during probabilistic risk analyses. The dominant accident sequences developed from a plant-specific probabilistic risk assessment represent the utility's best understanding of the most likely combination of events that must occur to create a situation in which core cooling is threatened or significant releases occur. It is desirable that emergency operating procedures (EOPs) provide adequate guidance leading to appropriate operator actions for these sequences. The OAETs provide a structured approach for assuring that the EOPs address these situations.
Plant and Wood Area Index of Solitary Trees for Urban Contexts in Nordic Cities
Background: We present the plant area index (PAI) measurements taken for 63 deciduous broadleaved tree species and 1 deciduous conifer tree species suitable for urban areas in Nordic cities. The aim was to evaluate PAI and wood area index (WAI) of solitary-grown broadleaved tree species and cultivars of the same age in order to present a data resource of individual tree characteristics viewed in summer (PAI) and in winter (WAI). Methods: All trees were planted as individuals in 2001 at the Hørsholm Arboretum in Denmark. The field method included a Digital Plant Canopy Imager where each scan and contrast values were set to consistent values. Results: The results illustrate that solitary trees differ widely in their WAI and PAI and reflect the integrated effects of leaf material and the woody component of tree crowns. The indications also show highly significant (P < 0.001) differences between species and genotypes. The WAI had an overall mean of 0.91 (± 0.03), ranging from Tilia platyphyllos ‘Orebro’ with a WAI of 0.32 (± 0.04) to Carpinus betulus ‘Fastigiata’ with a WAI of 1.94 (± 0.09). The lowest mean PAI in the dataset was Fraxinus angustifolia ‘Raywood’ with a PAI of 1.93 (± 0.05), whereas Acer campestre ‘Kuglennar’ represents the cultivar with the largest PAI of 8.15 (± 0.14). Conclusions: Understanding how this variation in crown architectural structure changes over the year can be applied to climate responsive design and microclimate modeling where plant and wood area index of solitary-grown trees in urban contexts are of interest.
Do Exotic Trees Threaten Southern Arid Areas of Tunisia? A Case Study (Indian Journal of Ecology, 2020)
This study was conducted in an afforested Stipa tenacissima steppe planted with exotic and native trees (Acacia salicina and Pinus halepensis, respectively), with the aim of comparing their effects on the understory vegetation and soil properties. For each tree species, two sub-habitats were distinguished: the canopied sub-habitat (under the tree crown) and the un-canopied sub-habitat (open grassland). Soil moisture was measured in both sub-habitats at 10 cm depth. In parallel to soil moisture, we investigated the effect of tree species on soil fertility. Soil samples were collected from the upper 10 cm of soil, excluding litter and stones. The nutrient status of the soil (organic matter, total N, extractable P) was significantly higher under A. salicina compared to P. halepensis and open areas. This tendency remained constant for the soil water content, which was significantly higher under trees compared to open sub-habitats. For water content, there were no significant differences between the studied trees. Total plant cover, species richness and the density of perennial species were significantly higher under the exotic species compared to other sub-habitats. Among the two tree species, Acacia salicina had the strongest positive effect on the understory vegetation. It seems to be more useful as a restoration tool in arid areas and more suitable for creating islands of resources and fostering succession than the other investigated tree species.
Effects of Elevated Atmospheric CO2 on Microbial Community Structure at the Plant-Soil Interface of Young Beech Trees (Fagus sylvatica L.) Grown at Two Sites with Contrasting Climatic Conditions
Soil microbial community responses to elevated atmospheric CO2 concentrations (eCO2) occur mainly indirectly via CO2-induced plant growth stimulation leading to quantitative as well as qualitative changes in rhizodeposition and plant litter. In order to gain insight into short-term, site-specific effects of eCO2 on the microbial community structure at the plant-soil interface, young beech trees (Fagus sylvatica L.) from two opposing mountainous slopes with contrasting climatic conditions were incubated under ambient (360 ppm) CO2 concentrations in a greenhouse. One week before harvest, half of the trees were incubated for 2 days under eCO2 (1,100 ppm) conditions. Shifts in the microbial community structure in the adhering soil as well as in the root rhizosphere complex (RRC) were investigated via TRFLP and 454 pyrosequencing based on 16S ribosomal RNA (rRNA) genes. Multivariate analysis of the community profiles showed clear changes of microbial community structure between plants grown under ambient and elevated CO2 mainly in RRC. Both TRFLP and 454 pyrosequencing showed a significant decrease in the microbial diversity and evenness as a response of CO2 enrichment. While Alphaproteobacteria dominated by Rhizobiales decreased at eCO2, Betaproteobacteria, mainly Burkholderiales, remained unaffected. In contrast, Gammaproteobacteria and Deltaproteobacteria, predominated by Pseudomonadales and Myxococcales, respectively, increased at eCO2. Members of the order Actinomycetales increased, whereas within the phylum Acidobacteria subgroup Gp1 decreased, and the subgroups Gp4 and Gp6 increased under atmospheric CO2 enrichment. Moreover, Planctomycetes and Firmicutes, mainly members of Bacilli, increased under eCO2. Overall, the effect intensity of eCO2 on soil microbial communities was dependent on the distance to the roots. This effect was consistent for all trees under investigation; a site-specific effect of eCO2 in response to the origin of the trees was not observed.
Michael Senteza added a reply
We have to separate science from business and politics in the first place, before we can adequately discuss the resolution of this global challenge.
The considerations on global warming can be logically broken down as follows:
1. What are the factors that have affected the earth's climate over the last million years? The last 100,000 years, 10,000 years and 1,000 years?
2. Observations: the climatic changes, formations, and archaeological data that support the changes.
3. The actualities of the earth's dynamics. For example, we know that approximately 2/3 of the earth is water, and of the remaining 1/3 approximately 60% is uninhabitable; of the 40% that is habitable, approximately 10% contribute to the alleged pollution. For example, as of 2022 (https://www.whichcar.com.au/news/how-many-cars-are-there-in-the-world) the US had 290 million cars, compared to 26 million in Africa (50+ countries), 413 million in the EU (33+ countries), and 543 million in Asia-Pacific (with a population of close to 2 billion). We estimate that as of May there are 1.45 billion cars. This means that North America, Western Europe and Asia-Pacific combined have approximately 1.3 billion cars, and yet close to 70% of vegetation cover and forest space is concentrated in Africa, South America, Northern Europe and Canada. We need to analyse this.
4. We also need to analyse the actualities of the cause, separating out factors beyond our reach, for example global warming as opposed to climate change. We know that climate change, as geologically and scientifically observed, has been the reason things like oil came into place, species became extinct and other formations were created. We need to realise that a fair share of changes in climate (which may sometimes be confused with global warming) have been due to changes in the earth's rotation, axis and orbit around the sun. These factors greatly affect the distribution of the sun's radiation onto the surface of the earth and its atmospheric impact. We must then consider how much we produce, the dispersion rate, natural chemical balances, and a volumetric analysis of the concentration, assimilation and alteration of elements.
5. The extent to which non-scientific factors attenuate the strength of the scientific argument. It is not uncommon for politicians to alter the rhetoric to serve their agenda; it is even worse when the sponsors of scientific research are intent on achieving specific goals rather than facts.
In conclusion, humans are intelligent enough to either end or mitigate the impact of global warming if it can be detached from capitalism and politics. Science can and will provide answers.
Relevant answer
Answer
We can only adapt to global climate change. See Goal 13 of the concept of sustainable development. We are only intelligent by definition (H. sapiens); nature is wise and knows better.
  • asked a question related to Reasoning
Question
1 answer
Hello. This image is taken from LA7 cancer stem cells. It seems that the cells that have a purple halo and appear shinier than the other cells have a high resistance to starvation conditions. What kind of cells do you think these are, and what is the reason for their resistance?
Relevant answer
Answer
The cells you've described with a purple halo and shinier appearance in the LA7 CSCs culture, which demonstrate a high resistance to starvation, could be presumed to be a subpopulation within the cancer stem cells known for their resilience in adverse conditions. The existence of such cells that appear distinct in terms of color and brightness could indicate a subset of cells with certain biological properties that confer this resistance.
Without a detailed analysis through biochemical and genetic investigations, it's hard to determine the exact identity of these cells.
  • asked a question related to Reasoning
Question
2 answers
Is synergetic control a model-free or model-based approach? Please tell me the reasons.
How about PID control?
Relevant answer
Answer
Synergetic Control:
Synergetic control, often involving the theory of synergetics which is related to self-organization and pattern formation in complex systems, can be applied in both model-free and model-based contexts. However, its implementation tends to be more aligned with a model-based approach. This is because synergetic control often requires a good understanding of the system dynamics to design the control laws that guide the system towards a desired self-organized pattern or behavior. It leverages mathematical models to predict and orchestrate the dynamics of the system components cooperatively, aiming for an optimal performance through self-organization principles.
PID Control:
PID (Proportional-Integral-Derivative) control, on the other hand, is primarily a model-free control strategy. It does not require a model of the process to be controlled; instead, it relies on adjusting the control inputs based on the error between the desired setpoint and the actual output. The PID controller adjusts its output using three terms – proportional, integral, and derivative – which are tuned based on the error over time. This approach makes PID control widely applicable in various situations where a detailed model of the system is not available or is difficult to develop.
In summary:
- Synergetic control is typically more model-based, requiring knowledge of the system’s dynamics to effectively drive the system towards a desired behavior using principles of self-organization.
- PID control is model-free, relying solely on feedback from the system to adjust its outputs, making it versatile and straightforward to implement in many different applications without detailed knowledge of the underlying system dynamics.
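The model-free character of PID is easy to see in code. The sketch below is a minimal illustration with hypothetical gains and a toy first-order plant (not tied to any particular system): the controller computes its output purely from the error signal, and no model of the plant appears anywhere inside it.

```python
# Minimal discrete PID controller (illustrative sketch; gains and the
# toy plant below are hypothetical, not from any referenced system).

def make_pid(kp, ki, kd, dt):
    """Return a stateful PID update function: error -> control output."""
    state = {"integral": 0.0, "prev_error": None}

    def update(error):
        state["integral"] += error * dt
        derivative = 0.0
        if state["prev_error"] is not None:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        # The three classic terms; note: no plant model is used here.
        return kp * error + ki * state["integral"] + kd * derivative

    return update

if __name__ == "__main__":
    # Toy first-order plant x' = -x + u, simulated with Euler steps.
    pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    x, setpoint = 0.0, 1.0
    for _ in range(2000):
        u = pid(setpoint - x)
        x += (-x + u) * 0.01
    print(round(x, 3))  # settles near the setpoint
```

Because no plant model is assumed, the gains are typically found empirically (for example with Ziegler-Nichols tuning), which is exactly the model-free trade-off described above.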
  • asked a question related to Reasoning
Question
1 answer
Temperature regulation is of utmost significance in pediatric anesthesia due to several reasons:
Relevant answer
Answer
Temperature regulation is of utmost significance in pediatric anesthesia due to several reasons:
  1. Increased Vulnerability to Hypothermia: Pediatric patients, especially neonates and infants, have a higher surface area-to-body mass ratio compared to adults. This results in increased heat loss through radiation, convection, evaporation, and conduction, making them more susceptible to hypothermia during the perioperative period.
  2. Impact on Metabolism and Oxygen Consumption: Hypothermia can lead to metabolic derangements, including decreased oxygen consumption, altered drug metabolism, and impaired coagulation function. Maintaining normothermia is crucial for preserving metabolic homeostasis and optimizing physiological functions during anesthesia and surgery.
  3. Cardiovascular Effects: Hypothermia can cause vasoconstriction and shivering, leading to increased systemic vascular resistance, peripheral vasoconstriction, and myocardial oxygen demand. These cardiovascular effects can exacerbate hemodynamic instability, especially in pediatric patients with congenital heart disease or compromised cardiac function.
  4. Respiratory Effects: Hypothermia-induced vasoconstriction and shivering can increase oxygen consumption and carbon dioxide production, potentially leading to respiratory acidosis and impaired gas exchange. Maintaining normothermia helps optimize respiratory function and oxygenation status during anesthesia and surgery.
  5. Impaired Wound Healing: Hypothermia can impair wound healing and increase the risk of surgical site infections by compromising immune function and delaying tissue repair processes. Maintaining normothermia promotes optimal wound healing and reduces the risk of postoperative complications.
  6. Neurological Effects: Hypothermia can adversely affect neurological function, leading to altered consciousness, delayed emergence from anesthesia, and increased risk of neurologic injury. Maintaining normothermia is essential for preserving neurological integrity and optimizing postoperative neurocognitive outcomes, especially in pediatric patients with developing brains.
  7. Prevention of Adverse Events: Hypothermia is associated with an increased risk of perioperative complications, including cardiac arrhythmias, coagulopathy, and wound dehiscence. Preventing hypothermia through active temperature management helps reduce the incidence of adverse events and improves overall perioperative outcomes in pediatric patients.
To address these concerns, anesthesia providers employ various strategies to maintain normothermia in pediatric patients, including prewarming of operating rooms and equipment, use of warming blankets or mattresses, warmed intravenous fluids, humidification of inspired gases, and active temperature monitoring throughout the perioperative period. By prioritizing temperature regulation as part of perioperative care, anesthesia providers can mitigate the risks associated with hypothermia and optimize outcomes for pediatric patients undergoing anesthesia and surgery.
  • asked a question related to Reasoning
Question
1 answer
We run a Sciex X500r qToF and tune it once a week currently. I've been told that negative tune has historically been poor on other systems. The peak intensities and widths are fine; however, the 520.9 m/z precursor ion dips in and out of the ±2 ppm (−2 to +2 ppm) range. I can't pinpoint what in the system is causing this. Is this something I should be concerned about?
Relevant answer
It seems like you're experiencing fluctuations in mass accuracy for the 520.9 m/z precursor ion during negative tuning on your Sciex X500r qToF. While this may not directly impact peak intensities or width, it could affect data reliability. Here are some steps you could take to address the issue:
1. Check instrument conditions.
2. Verify calibration.
3. Review tuning parameters.
4. Ensure proper sample preparation.
5. Consider external factors.
6. Consult Sciex support if needed.
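For context on the numbers involved: mass accuracy in ppm is (measured − theoretical) / theoretical × 10^6, so a ±2 ppm window at m/z 520.9 is only about ±0.001 Da wide, which is why small drifts (temperature, calibrant aging) can push an ion in and out of spec. A small sketch (the m/z value is taken from the question; the helper names are my own):

```python
# Mass-accuracy arithmetic for the +/-2 ppm tuning window discussed above.

def ppm_error(measured_mz, theoretical_mz):
    """Signed mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def ppm_window(theoretical_mz, ppm):
    """(low, high) m/z bounds of a +/-ppm acceptance window."""
    half_width = theoretical_mz * ppm / 1e6
    return (theoretical_mz - half_width, theoretical_mz + half_width)

if __name__ == "__main__":
    lo, hi = ppm_window(520.9, 2.0)
    print(f"+/-2 ppm window at m/z 520.9: {lo:.4f} to {hi:.4f}")
    print(f"error for a reading of 520.9010: {ppm_error(520.9010, 520.9000):.2f} ppm")
```

A reading only 0.001 Da high already sits at about +1.9 ppm, right at the edge of the window, so borderline drift rather than a hard fault is consistent with what you describe.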
  • asked a question related to Reasoning
Question
2 answers
I've prepared the following ceramics:
Sample ID = Composition
LMT = Li2Mg3TiO6
LMCT = Li2(Mg2.88Ca0.12)TiO6
LMCT-MN = Li2(Mg2.88Ca0.12)(Ti0.95(Mg1/3Nb2/3)0.05)O6
LMT-MN = Li2Mg3(Ti0.95(Mg1/3Nb2/3)0.05)O6
I've got two entirely different microstructures. Why are the last two so different from the first two?
Relevant answer
Answer
Thanks for your answer
  • asked a question related to Reasoning
Question
2 answers
Of course I sometimes doubt the afterlife is eternal salvation for all, so I live and deduce what it might be...
Relevant answer
Answer
Jesus can give man eternal life and redemption, according to the Scripture.
  • asked a question related to Reasoning
Question
1 answer
1) Maybe I'm slightly less intuitive. I consider myself kind of a skeptical empiricist/critical rationalist.
2) I don't believe concepts are eternal because they need to be adjusted to avoid contradictions.
3) Without some transcendence beyond materialism, we would NOT be able to reason.
4) Maybe reason is the ONLY absolute CONCEPT. And reason derives from God.
5) Concepts also aid execution; thus, maybe I'm a more skeptical Aristotelian.
Sources:
Relevant answer
Answer
There is nothing wrong with being reasonable or rational in how you base your fundamental views and perception of the universe. However, we live in a society where thousands of beliefs and variations of beliefs exist. In order to live together, we must use our intelligence for tolerance. It is not important to define our perceptions, but to understand them as they are.
  • asked a question related to Reasoning
Question
84 answers
By reason of the application of the Lorentz Factor [(1 - (v squared / c squared)) raised to the power of 1/2] in the denominator of equations, luminal and other comparable energy propagations take on one and the same velocity. This is the relativity-effect (better, comparative effect) between v of objects, compared to c of the speed of light. That is, it is presupposed here that c is the object of comparison for determining the speed effect of velocity difference across a duration.
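For reference, the Lorentz factor and the associated velocity-addition rule mentioned above can be evaluated numerically. This sketch only reproduces the standard textbook formulas for concreteness; it takes no position on the interpretive claims in this post:

```python
# Standard special-relativity quantities referenced above, stated
# numerically for concreteness (textbook formulas only).

import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2)"""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def add_velocities(u, v):
    """Relativistic composition: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1.0 + u * v / C**2)

if __name__ == "__main__":
    print(f"gamma(0.9c) = {lorentz_factor(0.9 * C):.3f}")   # ~2.294
    # Composing two subluminal velocities never exceeds c:
    print(add_velocities(0.9 * C, 0.9 * C) / C)             # ~0.9945
```

In these formulas, c enters as the invariant of the composition law itself, which is the feature the post interprets as a criterion-dependence of the observed limit.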
It is against the criterion-velocity itself c that c becomes unsurpassable! Hence, I am of the opinion that the supposed source-independence is nothing but an effect of OUR APPARATUS-WISE OBSERVATION LIMIT AND OUR FIXING OF THE CRITERION OF OBSERVATION AS THE OBSERVED VELOCITY OF LIGHT.
In this circumstance, it is useless to claim that (1) luminal and some other energy propagations with velocity c are source-independent, and (2) these wavicles have zero rest mass, since the supposed source-independence has not been proved theoretically or experimentally without using c as the criterion velocity. The supposed source-independence is merely an effect of c-based comparison.
Against this background, it is possible to be assured that photons and other similar c-wavicles are extended particles -- varying their size throughout the course of motion in the spiral manner. Hence the acceptability of the term 'wavicle'. Moreover, each mathematical point of the spiral motion is to be conceived not as two-, but as three-dimensional, and any point of motion added to it justifies its fourth dimension. Let us call motion as change.
These four dimensions are measuremental, hence the terms 'space' (three-dimensional) and 'time' (one-dimensional). This is also an argument countering the opinion that in physics and cosmology (and other sciences) time is not attested!
The measurements of the 3-space and of the 1-time are not in the wavicles and in the things being measured. The measurements are cognitive characteristics of the act of measuring.
IN FACT, THE EXTENSION OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'SPACE', AND THE CHANGE OF THE WAVICLE OR OTHER OBJECTS IS BEING MEASURED AND TERMED 'TIME'. Hence, the physically out-there-to-find characteristics of the wavicles and objects are EXTENSION AND CHANGE.
Extension is the quality of all existing objects by which they have parts. This is not space. Change is the quality by which they have motion, i.e., impact generation on other similar wavicles and/or objects. This is not time. Nothing has space and time; nothing is in space and time. Everything is in Extension-Change.
Any wavicle or other object existing in Extension-Change is nothing but impact generation by physically existent parts. This is what we term CAUSATION. CAUSALITY is the relation of parts of physical existents by which some are termed cause/s and the others are termed effect/s. IN FACT, THE FIRST ASPECT OF THE PHYSICALLY ACTIVE PARTS, WHICH BEGINS THE IMPACT, IS THE CAUSE; AND THE SECOND ASPECT IS THE EFFECT. Cause and effect are, together, one unit of continuous process.
Since energy wavicles are extended, they have parts. Hence, there can be other, more minute, parts of physical objects, which can define superluminal velocities. Here, the criterion of measurement of velocity cannot be c. That is all...! Hence, superluminal velocities are a must by reason of the very meaning of physical existence.
THE NOTION OF PHYSICAL EXISTENCE ('TO BE') IS COMPLETELY EXHAUSTED BY THE NOTIONS OF EXTENSION AND CHANGE. Hence, I call Extension and Change the highest physical-ontological Categories. A metaphysics (physical ontology) of the cosmos is thus feasible. I have been constructing one such. My book-length publications have been efforts in this direction.
I invite your contributions by way of critiques and comments -- not ferocious, but friendly, because I do not claim that I am the last word in any science, including philosophy of physics.
Bibliography
(1) Gravitational Coalescence Paradox and Cosmogenetic Causality in Quantum Astrophysical Cosmology, 647 pp., Berlin, 2018.
(2) Physics without Metaphysics? Categories of Second Generation Scientific Ontology, 386 pp., Frankfurt, 2015.
(3) Causal Ubiquity in Quantum Physics: A Superluminal and Local-Causal Physical Ontology, 361 pp., Frankfurt, 2014.
(4) Essential Cosmology and Philosophy for All: Gravitational Coalescence Cosmology, 92 pp., KDP Amazon, 2022, 2nd Edition.
(5) Essenzielle Kosmologie und Philosophie für alle: Gravitational-Koaleszenz-Kosmologie, 104 pp., KDP Amazon, 2022, 1st Edition.
Relevant answer
  • asked a question related to Reasoning
Question
2 answers
I'm scared for security reasons. I have been looking for an easy way to format my manuscript for the European Journal of Pharmaceutics and Biopharmaceutics, and others... does anybody have any recommendations for SciSpace or other tools?
Relevant answer
Answer
Nope, I just used the template from the journal site.
  • asked a question related to Reasoning
Question
1 answer
Why am I getting high hydrogen evolution on the Fe-Co electrodeposited films at a higher scan rate (50 mV/s) in KOH electrolyte? What could be the reason? I'm using square wave pulse voltammetry.
Relevant answer
Answer
In square wave pulse voltammetry for electrodeposition of Fe-Co alloy films, observing higher current densities or more pronounced hydrogen evolution at higher scan rates (e.g., 50 mV/s) in a KOH electrolyte could be attributed to several factors:
· Mass transport limitations: At higher scan rates, the depletion of electroactive species (Fe2+ and Co2+) near the electrode surface occurs more rapidly due to the shorter timescale for diffusion to replenish the consumed ions. This mass transport limitation can lead to an increase in the current density due to the contribution of the hydrogen evolution reaction (HER) as a parallel process.
· Kinetic effects: The faster potential sweep at higher scan rates may not allow sufficient time for the deposition process to reach equilibrium conditions, favoring kinetically controlled reactions like the HER over the metal deposition process.
· Ohmic drop effects: At higher current densities associated with higher scan rates, the potential drop due to the solution resistance (iR drop) becomes more significant, leading to a shift in the effective potential at the electrode surface. This shift can promote the HER over metal deposition.
· Nucleation and growth mechanisms: The higher scan rates may influence the nucleation and growth mechanisms of the Fe-Co alloy films, affecting the surface morphology and potentially favoring the HER on certain surface sites or defects.
· Electrolyte composition: The alkaline KOH electrolyte is known to facilitate the HER, and at higher scan rates, the contribution of the HER may become more pronounced due to the kinetic and mass transport effects mentioned above.
To mitigate the high hydrogen evolution at higher scan rates, you could consider the following strategies:
a) Optimize the electrolyte composition, such as adjusting the concentrations of metal ions, complexing agents, or pH, to suppress the HER while favoring metal deposition.
b) Decrease the scan rate or use a different voltammetric technique (e.g., potentiostatic deposition) to minimize mass transport limitations and kinetic effects.
c) Modify the electrode surface or pre-treatment procedures to promote nucleation and growth of the desired Fe-Co alloy phase over the HER.
d) Investigate the use of alternative electrolytes or additives that can selectively inhibit the HER while promoting metal deposition.
It's important to note that the specific reasons may vary depending on the exact experimental conditions, and a combination of these factors could be contributing to the observed behavior. Further investigation and optimization may be required to achieve the desired electrodeposition characteristics.
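As a rough numeric illustration of the ohmic-drop point above, here is a minimal sketch (all values are assumed, not measured): the potential "lost" across the uncompensated solution resistance R_s scales linearly with the current, so the faster scans that draw more current shift the effective surface potential by more.

```python
# Illustration of the iR (ohmic) drop: |delta E| = i * R_s, so higher
# currents at faster scan rates shift the effective electrode potential.
R_s = 5.0                        # assumed uncompensated solution resistance, ohms
currents_mA = [1.0, 10.0, 50.0]  # assumed currents at increasing scan rates

# ohmic drop in millivolts for each current
ohmic_drop_mV = [i_mA * 1e-3 * R_s * 1e3 for i_mA in currents_mA]
# At 50 mA the drop is already 250 mV -- easily enough to push the
# effective potential into the hydrogen evolution regime.
```

The numbers are placeholders; in practice R_s would come from the high-frequency intercept of an impedance measurement.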
  • asked a question related to Reasoning
Question
1 answer
When dealing with composite systems comprising a finite subsystem and an infinite one, the aim often involves numerically analyzing specific properties, such as entanglement entropy between these subsystems.
In the process of performing a partial trace over one of the subsystems, which approach yields more reliable results: tracing over the finite system or over an approximation of the infinite subsystem? Intuitively, tracing over the larger subsystem seems preferable since it reduces the dimensionality of the remaining matrix. However, does this reasoning hold when considering the faithfulness of the results?
Relevant answer
Answer
In both finite and infinite approximations, the aim is to obtain a definite result, and what matters is the accuracy of that result. How long the approximation takes is not important.
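On the numerical side of the question, it is worth noting that for a pure global state the entanglement entropy is identical whichever subsystem is traced out, so tracing over the larger (truncated) side buys only efficiency, not extra faithfulness. A minimal NumPy sketch, using a Bell state as a stand-in for a small bipartite system:

```python
import numpy as np

# A pure bipartite state on H_A (dim dA) x H_B (dim dB), stored as the
# amplitude matrix psi[a, b].
def reduced_density_matrices(psi):
    rho_A = psi @ psi.conj().T   # partial trace over subsystem B
    rho_B = psi.conj().T @ psi   # partial trace over subsystem A
    return rho_A, rho_B

def entanglement_entropy(rho):
    # von Neumann entropy S = -Tr(rho ln rho), computed from eigenvalues
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Bell state (|00> + |11>)/sqrt(2)
psi = np.zeros((2, 2))
psi[0, 0] = psi[1, 1] = 1 / np.sqrt(2)

rho_A, rho_B = reduced_density_matrices(psi)
# Both entropies equal ln 2: S(rho_A) == S(rho_B) for any pure global state.
```

The practical caveat is the truncation itself: once the infinite subsystem is approximated by a finite one, the error lives in how well the truncated model represents the physics, not in which side of the (now pure, finite) state is traced out.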
  • asked a question related to Reasoning
Question
4 answers
I performed RT-PCR to validate my differentially expressed microRNAs, but I did not get accurate results. What could be the reason? I also performed standard PCR with a gradient of annealing temperatures to select the best one, and got bands at almost all temperatures. My reference gene (U6 snRNA) worked well in RT-PCR, but the miRNA-specific primers did not show any results (i.e., no amplification occurred).
Please help me to find the correct reason.
Relevant answer
Answer
Xiusheng Zhu Ok Sir. Thank you.
  • asked a question related to Reasoning
Question
16 answers
Dear colleagues
I am exploring the reasons that would lead an individual to study for 3-4 or more years to complete a nursing degree. What do you think are the reasons to pursue this career?
Looking forward to your thoughts
Kind regards
Tiago
Relevant answer
Answer
Dear Kayeekuh Saintlouis thank you for your reply and for providing another insight into this reality. Appreciate it!
  • asked a question related to Reasoning
Question
8 answers
It's important to remember that while there may be conflicts and divisions among people based on various factors such as race, nationality, religion, or ideology, it is essential to recognize and respect the shared humanity that unites us all. Embracing our common humanity can help bridge divides and promote understanding and empathy.
The shared humanity that unites all of us refers to the fundamental qualities and characteristics that we all possess as human beings. These include:
  1. Emotions: We all experience a range of emotions, including joy, sadness, anger, fear, and love.
  2. Capacity for Reason: We all have the ability to reason, think critically, and solve problems.
  3. Social Connections: We all have a need for social connections and relationships with others.
  4. Physical Needs: We all have basic physical needs such as food, water, and shelter.
  5. Vulnerability: We are all vulnerable to illness, injury, and death.
  6. Diversity: We all come from different backgrounds, cultures, and experiences.
Recognizing and embracing these shared qualities can help us to connect with others, promote empathy and understanding, and work towards a more peaceful and just world.
What do you think?
Relevant answer
Answer
Thank you, Dr. Chandan Kumar, for sharing your insightful perspective!
Indeed, the human mind remains a fascinating frontier, with its intricate workings still largely uncharted. As we explore the intersection of electronic intelligence and human cognition, we witness remarkable advancements. Yet, amidst our sophistication, we grapple with our inherent differences—unique perspectives, needs, and interests.
Humanity, that delicate thread binding us all, sometimes frays due to our divergent paths. Norms and regulations exist, but their adherence varies. The resulting discord reverberates globally, creating complexities.
Yet, as you wisely note, the world persists, following its natural course. Acceptance becomes our compass. And perhaps, through collective effort, we can mend the fabric of humanity, weaving understanding, compassion, and positive change.
Warm regards,
Dawood
  • asked a question related to Reasoning
Question
2 answers
I am working on a turbine simulation. Can you tell me why my power graph is a straight line? I used the expression Power = torque * omega and supplied the value of omega, but it did not work, even though the torque graph displays correctly. Also, my solution has not converged; what could be the reason?
Relevant answer
Answer
Sadiki Zakariae Dear, now I used a 2D model of a Savonius wind turbine, but the same issue is appearing: it converges quickly, around iteration 100 (I set 1000 iterations).
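As a quick sanity check on the expression in the original question, here is a minimal sketch (all numbers assumed): with P = T * omega and a constant omega, the power curve must be a scaled copy of the torque curve. A flat power trace while torque varies therefore usually means the power expression is not referencing the instantaneous torque values.

```python
import math

# Assumed rotor speed; convert rpm to angular velocity in rad/s
rpm = 120.0
omega = 2 * math.pi * rpm / 60.0

# Assumed torque samples (N*m) over a few time steps, as from a torque monitor
torque = [1.8, 2.4, 3.1, 2.7]

# P = T * omega: with constant omega, power mirrors the torque curve exactly.
# If the plotted power is flat while torque varies, the expression is most
# likely picking up a constant/initial torque value instead of the monitor.
power = [t * omega for t in torque]
```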
  • asked a question related to Reasoning
Question
1 answer
What are the reasons for differences in interest rates between countries (developed vs. developing)?
Relevant answer
Answer
1. Economic stability: Developed countries generally have more stable economies, lower inflation rates, and lower political risks, leading to lower interest rates compared to developing countries.
2. Creditworthiness: Developed countries typically have higher credit ratings and are considered less risky for lenders, resulting in lower interest rates. Developing countries may have lower credit ratings, making it riskier for lenders and leading to higher interest rates.
3. Financial market infrastructure: Developed countries usually have more sophisticated financial systems and institutions, which can lead to lower interest rates. Developing countries may lack a robust financial infrastructure, resulting in higher interest rates.
4. Currency stability: Developed countries often have more stable currencies, which can lead to lower interest rates. Developing countries may have more volatile currencies, leading to higher interest rates to compensate for currency risks.
5. Level of government debt: Countries with high levels of government debt may face higher borrowing costs, resulting in higher interest rates. Developed countries generally have lower levels of government debt compared to developing countries.
6. Inflation rates: Higher inflation rates in developing countries can lead to higher interest rates to compensate for the erosion of purchasing power. Developed countries typically have lower inflation rates, resulting in lower interest rates.
7. Foreign investment: Developed countries may attract more foreign investment due to their stable economic and political environments, leading to lower interest rates. Developing countries may offer higher interest rates to attract foreign investment and finance their development projects.
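Points 1 and 6 above can be made concrete with the Fisher relation, which links nominal rates, real rates, and expected inflation. The figures below are illustrative assumptions, not actual country data:

```python
# Exact Fisher equation: (1 + i) = (1 + r)(1 + pi)
def nominal_rate(real_rate, inflation):
    return (1 + real_rate) * (1 + inflation) - 1

# Assumed, illustrative figures
developed  = nominal_rate(0.02, 0.02)  # low real rate, low inflation -> ~4.0%
developing = nominal_rate(0.03, 0.10)  # risk premium + high inflation -> ~13.3%
```

Even with a modest real-rate risk premium, the inflation term alone accounts for most of the gap in nominal rates.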
  • asked a question related to Reasoning
Question
3 answers
While synthesizing graphene oxide, the solution turned green when I added H2O2. What is the reason for this?
Relevant answer
Answer
Thank you for your valuable insights, Sir Yuri Mirgorod. Regarding the absence of green color in the provided references, I apologize for any confusion caused. The articles I referenced were intended to provide insights into the chemical processes underlying the synthesis and reduction of graphene oxide, rather than specifically discussing the appearance of green coloration.
The appearance of green chromophores, as you rightly pointed out, may indeed be a result of the interaction between graphene oxide and hydrogen peroxide, among other factors.
Rahul Sharma I've come across additional literature that sheds light on the color transitions observed during the synthesis of graphene oxide.
  • Otsuka, Hayato, Koki Urita, Nobutaka Honma, Takashi Kimuro, Yasushi Amako, Radovan Kukobat, Teresa J. Bandosz, Junzo Ukai, Isamu Moriguchi, and Katsumi Kaneko. "Transient chemical and structural changes in graphene oxide during ripening." Nature Communications 15, no. 1 (2024): 1708.
  • Yoo, Myung Jin, and Ho Bum Park. "Effect of hydrogen peroxide on properties of graphene oxide in Hummers method." Carbon 141 (2019): 515-522.
  • Wang, Jiabin, Elif Caliskan Salihi, and Lidija Šiller. "Green reduction of graphene oxide using alanine." Materials Science and Engineering: C 72 (2017): 1-6.
I hope these are helpful. Best regards
  • asked a question related to Reasoning
Question
3 answers
The main reason is a lack of well-designed studies in the area of interest. Is there a standard tool for quality assessment of case reports?
Relevant answer
Answer
Yes, you can include them in the systematic review. JBI provides a tool to assess the quality of such studies: "Critical appraisal checklist for case reports" (https://jbi-global-wiki.refined.site/space/MANUAL/355599400/Appendix+7.4+Critical+appraisal+checklist+for+case+reports). However, let's note that Cochrane Handbook emphasized the limitation of case reports (SR of adverse events): "Spontaneous case reports or case series may assist in signalling rare and previously unknown events. However, for most Cochrane Reviews, these data sources should be used for scoping purposes only (particularly as they do not have denominator data to allow estimation of risks or rates). These spontaneous reports may guide drafting of the protocol when there is a need to choose relevant or important adverse effects as outcomes of interest." (https://training.cochrane.org/handbook/current/chapter-19)
  • asked a question related to Reasoning
Question
1 answer
I am testing a methylation-specific ddPCR assay and observed two positive droplet layers in the FAM channel. Based on the 2D plot, it does not seem to be affected by signal from the HEX channel. What would be possible reasons for double positive clouds?
Relevant answer
Answer
Hi, there are several options for this phenotype.
What method did you use to detect methylation? Bisulfite conversion, MSRE, enzymatic?
It looks like the lower cloud has less efficient amplification compared to the upper cloud. This could be due to:
- a longer (or other) amplicon
- a slight variation in sequence. If you have multiple methylation sites in the amplicon, it might be 1 vs 2 sites that are methylated (and converted) or it could be incomplete conversion for example.
Depending on the method, do you have a 100% converted positive control (just a gblock or ultramer sequence) ? Do you see the same phenotype there?
  • asked a question related to Reasoning
Question
2 answers
CO2 (carbon dioxide) is considered the main factor affecting climate change. Could the increase in the speed of the Earth's rotation instead be the main reason for climate change?
Relevant answer
Answer
We cannot feel the increase in the Earth's rotation speed because our clocks keep ticking at the same rate, so we perceive time as unchanged. Explosions on the Sun affect the Earth through changing energy waves, and the Earth's rotation speed is affected by those energy waves. In fact, CO2 (carbon dioxide) is a beneficial gas for nature.
  • asked a question related to Reasoning
Question
3 answers
I have deposited Al-doped PbS thin films using the CBD method at different Al concentrations: 0, 2, 4, 6 and 8%. I observed that the thickness of the films decreases as the Al concentration increases. Can anyone explain the reason?
Relevant answer
Answer
The addition of Al dopants can alter the chemical reactions occurring during film deposition. This modification may lead to changes in precursor availability, reactivity, or stability, ultimately affecting the growth rate and resulting in thinner films. Also, Al dopants may influence the crystallite size and orientation of the PbS thin films. Higher concentrations of aluminum ions could lead to smaller crystallite sizes due to the incorporation of Al atoms into the PbS lattice or the formation of secondary phases. Smaller crystallite sizes often result in thinner films due to decreased grain coalescence during film growth.
  • asked a question related to Reasoning
Question
2 answers
I am working on a multiplex immunoflurescence brain tissue slides
Relevant answer
Answer
Yes, I added the concentrate at 2 drops per 300 µl of buffer, which is even more concentrated than the manufacturer's advice.
  • asked a question related to Reasoning
Question
1 answer
For some reason ResearchGate combined my citation list with that of Michael D. Baer, which could mislead those who are interested in either of these lists.
Relevant answer
Answer
You may fix it by separating the two citation lists from top to bottom
  • asked a question related to Reasoning
Question
1 answer
I am currently doing research on data mining in digital marketing and would like to get your opinion:
1. What are the effects of data mining and its impact on digital marketing?
2. Does mining artificially alter an organization's marketing campaigns? If yes, what are the pros and cons? If no, please state your reason or observations.
3. Is data mining the future of digital marketing? Will mining determine the profitability of organizations in the near future?
4. Any other advice on this topic to aid my research.
Relevant answer
Answer
Dear Nneka Olasetemi please do recommend my answer if helpful
Data mining has a significant impact on digital marketing, enabling marketers to leverage large volumes of data to make informed decisions, improve targeting, personalize content, and enhance overall marketing effectiveness. Here are some key impacts of data mining in digital marketing:
1. **Customer Insights and Segmentation**: Data mining allows marketers to analyze customer data to gain insights into behavior, preferences, and purchasing patterns. By segmenting customers based on these insights, marketers can tailor marketing campaigns to specific audience segments, improving relevance and engagement.
2. **Personalization**: With data mining, marketers can create personalized marketing messages, offers, and recommendations tailored to individual customers' preferences and past interactions. Personalization enhances the customer experience, fosters loyalty, and increases the likelihood of conversion.
3. **Predictive Analytics**: Data mining techniques such as predictive analytics enable marketers to forecast future trends, identify potential opportunities, and anticipate customer needs. By analyzing historical data, marketers can make data-driven predictions about customer behavior, market trends, and campaign performance, allowing for proactive decision-making and strategic planning.
4. **Optimized Targeting and Acquisition**: Data mining helps marketers identify high-value prospects and target them with relevant offers and content. By analyzing demographic, behavioral, and transactional data, marketers can identify potential customers who are most likely to convert and optimize their marketing efforts to acquire them cost-effectively.
5. **Customer Retention and Loyalty**: By analyzing customer data, marketers can identify at-risk customers and implement targeted retention strategies to reduce churn and foster loyalty. Data mining helps marketers understand the factors influencing customer loyalty and satisfaction, enabling them to tailor retention efforts and improve customer lifetime value.
6. **Campaign Optimization**: Data mining allows marketers to analyze the performance of marketing campaigns in real-time and optimize them for better results. By tracking key metrics such as click-through rates, conversion rates, and return on investment (ROI), marketers can identify areas for improvement and adjust their strategies accordingly to maximize effectiveness.
7. **Competitive Intelligence**: Data mining enables marketers to gather insights into competitors' strategies, market positioning, and customer behavior. By analyzing publicly available data and monitoring competitors' activities, marketers can identify emerging trends, benchmark performance, and stay ahead of the competition.
Overall, data mining empowers digital marketers with actionable insights, enabling them to make informed decisions, improve targeting and personalization, optimize campaigns, and drive better results in today's competitive digital landscape.
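To make the segmentation point concrete, here is a toy RFM (Recency/Frequency/Monetary) scoring pass in plain Python; all customer records, field names, and thresholds are hypothetical:

```python
# Hypothetical customer records
customers = [
    {"id": "A", "days_since_purchase": 5,   "orders": 12, "spend": 900.0},
    {"id": "B", "days_since_purchase": 200, "orders": 1,  "spend": 40.0},
    {"id": "C", "days_since_purchase": 30,  "orders": 4,  "spend": 150.0},
]

def rfm_segment(c):
    # One point each for being recent, frequent, and high-spend
    score  = 1 if c["days_since_purchase"] <= 60 else 0
    score += 1 if c["orders"] >= 3 else 0
    score += 1 if c["spend"] >= 200 else 0
    return {3: "loyal", 2: "promising", 1: "at-risk", 0: "churned"}[score]

segments = {c["id"]: rfm_segment(c) for c in customers}
# A scores 3 ("loyal"), B scores 0 ("churned"), C scores 2 ("promising")
```

Real pipelines replace the hand-set thresholds with data-driven cutoffs (e.g., quantiles) and feed the segments into campaign targeting.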
  • asked a question related to Reasoning
Question
1 answer
These substrates were thoroughly washed and then coated with an SnO2-EDTA complex and a perovskite solution of MA and PbI3 (in a glove box). What do you think could be the main reason for them to look like this?
Relevant answer
Answer
I have been facing the same issue in the case of triple cation (CsFAMA) perovskite thin films.
The foggy shade observed is mostly associated with the antisolvent dripping parameters. The following paper explains the role of dripping speed and time for the preparation of shiny films with respect to the antisolvent used. (https://www.nature.com/articles/s41467-021-22049-8)
Also, if we move towards double or triple cation perovskite, the phases segregate at the surface, contributing towards roughness and whitish shade seen by the naked eye.
The solution I would suggest is to try optimizing the antisolvent dripping step: start with the dripping speed, then optimize the distance between the pipette tip and the sample while dripping.
Thanks
  • asked a question related to Reasoning
Question
2 answers
The fact that a feature is in complementary distribution does not seem to be a sufficient reason to discard the feature as irrelevant, especially as such features seem phenomenologically relevant.
Relevant answer
Answer
That indeed does seem to be the case. Thank you for your answer!
  • asked a question related to Reasoning
Question
3 answers
In non-polar solvents such as cyclohexane or heptane, we are observing biexponential fluorescence lifetime decays in a few amine derivatives, which is an unusual observation. Could someone provide some insight into this observation?
Relevant answer
Answer
ICT species can form in nonpolar solvents if the molecule has a suitable donor-acceptor structure, even though they might be more stabilized in polar solvents. The formation of ICT states is mainly determined by the molecule's intrinsic properties rather than the solvent polarity.
  • asked a question related to Reasoning
Question
2 answers
When full empiricism seems to have a foothold and more is sought (no compromises sought) in the psychological, the biological, and the social: the Age of Reason may begin.
Relevant answer
Answer
I worshiped Piaget for 3 decades. But, more recently, I determined that his "theory" is not fully empirical, but just descriptive (it points to/towards NO proximate causes). My neo-Piagetian theory is fully empirical and does point at proximate causes.
Something is not empirical to me unless it is fully (aka really) empirical
  • asked a question related to Reasoning
Question
3 answers
To understand the corrosion inhibition efficiency of a coating on a metal surface, electrochemical analyses are performed. But why is the solution resistance (Rs) not the same in every experiment during the electrochemical impedance spectroscopy (EIS) analysis of a coated metal coupon in the same electrolyte medium? What may be the reasons?
Relevant answer
Answer
Thank you so much, respected Martin Otto and João Carlos Martins da Costa for your valuable responses and suggestions.
  • asked a question related to Reasoning
Question
4 answers
When culturing 22Rv1 cells, I often encounter sudden cell death, and it is difficult to find a clear reason. After culturing for 4-5 passages, the cells suddenly begin to grow slowly, tend to aggregate, and gradually die. Can colleagues with similar experiences help resolve my confusion? I also culture other tumor cells but have never encountered similar issues before. The following are images of the problematic cells.
Relevant answer
Answer
Arvind Kumar Shukla First of all, thank you very much for your suggestions. However, to be frank, I find your advice too general. It's difficult to address my specific issue based on your response, as all the problems encountered with the cells could be explained using your suggestions.
  • asked a question related to Reasoning
Question
3 answers
What factors or physical mechanisms affect the laser-produced plasma (LPP) expansion process? How can the instability across multiple measurements be reduced?
Relevant answer
Answer
The instability in the process of laser plasma expansion arises from a multitude of factors that influence the dynamics of the expanding plasma. Hydrodynamic instabilities, including Rayleigh-Taylor and Richtmyer-Meshkov instabilities, emerge at the interface between the laser-produced plasma and the surrounding medium, leading to mixing and irregularities in plasma expansion.
Additionally, self-focusing effects can occur due to the nonlinear response of the plasma to intense laser fields, resulting in filamentation or channeling within the plasma. Nonuniform heating, whether from variations in laser intensity or material properties, can cause uneven expansion velocities and density gradients, triggering further instability. Collisional effects and the presence of magnetic fields also contribute to complex plasma dynamics, altering expansion behavior and stability. Radiative cooling further complicates matters, as it affects energy balance and can lead to fragmentation or condensation of the plasma. Moreover, pulse-to-pulse variations in laser parameters or target conditions introduce stochastic fluctuations, exacerbating instability.
Understanding and mitigating these instabilities are vital for applications such as laser fusion and plasma-based accelerators, necessitating a combination of theoretical modeling, numerical simulations, and experimental diagnostics.
Resources such as scientific journals like "Physics of Plasmas," "Journal of Applied Physics," and "Physical Review Letters" provide valuable research articles on laser plasma dynamics and instability mechanisms, while textbooks like "Fundamentals of Plasma Physics" by J.A. Bittencourt offer comprehensive insights into plasma behavior and instabilities. Additionally, research groups and laboratories specializing in laser plasma interactions, such as the Lawrence Livermore National Laboratory's National Ignition Facility (NIF) or the Max Planck Institute for Plasma Physics, conduct experimental studies and provide valuable data for understanding and addressing plasma instability challenges.
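For reference, the Rayleigh-Taylor mechanism mentioned above has a simple textbook linear growth rate (inviscid, incompressible, sharp-interface limit), which shows why short-wavelength perturbations at an accelerating interface grow fastest:

```latex
% Linear Rayleigh-Taylor growth rate for a perturbation of wavenumber k
% on an interface with effective acceleration g and Atwood number A:
\gamma_{\mathrm{RT}} = \sqrt{A\,g\,k},
\qquad
A = \frac{\rho_{\mathrm{heavy}} - \rho_{\mathrm{light}}}{\rho_{\mathrm{heavy}} + \rho_{\mathrm{light}}}
```

In real laser plasmas, density gradients, ablative stabilization, and magnetic fields modify this rate, but the scaling already explains the sensitivity of the expansion to small shot-to-shot perturbations.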
  • asked a question related to Reasoning
Question
12 answers
I have deposited iron titanate thin films by electrodeposition at different molar concentrations.
Q1. In the XRD pattern of the as-deposited films I do not observe any peaks, while after MF annealing at 300 degrees Celsius I observe clear intensity peaks. What is the basic reason behind this?
Q2. At different molar concentrations I observed different peaks, except for two peaks that appeared on the same plane. What would be the main cause behind this?
Relevant answer
Answer
Dear Gerhard Martens
Thank you for such a useful information.
  • asked a question related to Reasoning
Question
8 answers
This is the chromatogram I received after running a Supelco C8-C24 FAME mix on a TG-5MS column (30 m × 0.25 mm × 0.25 µm, 5% phenyl methylpolysiloxane). Kindly suggest the reason behind the baseline shift and how to prevent it.
Relevant answer
Answer
What does a blank (hexane) injection look like? Have you previously injected FAMEs from animal samples without extracting cholesterol? Perhaps dry your standard, resuspend it in hexane and rerun; a little methanol can lead to a late, broad peak. If you get the same baseline with a blank, I'd bet on column bleed.
  • asked a question related to Reasoning
Question
7 answers
Hello everyone
1. Please suggest robust free software to analyze XRD results and obtain the corresponding 3D structure of a protein.
2. Is an XRD diffractogram with only a single peak better than one with multiple peaks, or vice versa? And what are the reasons?
3. What does the .raw file show after XRD analysis?
Thanks to all
Relevant answer
Answer
X'Pert HighScore software
  • asked a question related to Reasoning
Question
8 answers
inform about reason.
Relevant answer
Answer
It's cheap and can be thrown away afterwards. It is reasonably chemically inert. It's waterproof. It can mould to any shape of vessel, and it's autoclavable.
  • asked a question related to Reasoning
Question
1 answer
Hi,
I am currently looking for suggestions on choosing which cryovials to use for cell cryopreservation, internally or externally threaded. Do you have a preference, or is there any reason for choosing one over the other?
Many Thanks!
Relevant answer
Answer
No difference was found in our experience.
  • asked a question related to Reasoning
Question
1 answer
Another try to make progress in eliminating ignorance/delusion and arrogance and conceit in behavioral SCIENCE.
For science , for empiricism (and for AI (<-- to enlist, YET eventually dispel, the greed motivator)) : the truly empirical behavioral scientists, those who ARE empirical in studying behavior PATTERNS (SO: just and only all the involved overt behavioral PATTERNS will do, when looked at developmentally, for ALL explanation), must work in a way to come to see that THE MAJOR TYPES OF LEARNING (and these occur during ontogeny) ___ ARE ___ found (discovered, like the naturalist) to BE major kinds/types of INDUCTION (as is true of all other developed organisms). We cannot be that different for it to be otherwise.
As true factual and empirical as classical and operant conditioning (and habituation, etc.) ARE, THESE ARE the extreme trivial details. [ AND, one must realize : "Social learning" is a farce, for such a vague concept loses the individual organism as the ONLY true empirical unit-of-analysis -- which it IS (MUST be, that's biology, friends). ] MY system of understanding, in my two major papers, OUTLINES what one should find concretely IN OVERT BEHAVIOR PATTERNS (and never leave the word "patterns" out ) -- reflecting the major types/kinds of induction.
[ And, though big on induction, the proximate causes are [ attentional / ] perceptual shifts . (I hate to say it, but one can reason-out the necessity of this being the case.) ]
Starting with this attitude and outlook, only then can we find (AS IS NECESSARY for ALL good reasons and science) the was-ness in the is-ness ( i.e.; previous grand well-developed units as THE units, or portions as part-units, USED IN more advanced inductive reasonings). This all (all the above) is absolutely the shortest way of saying what we MUST realize (<-- not "just subjectively" at all) ). AND: one cannot argue an excuse, or THAT ITSELF is THE VERY damning premature hypothetico-deductive "reasoning" , the very essence of arrogance and conceit AND that which necessarily derails science -- that being the necessary consequence of "jumping the gun" on prediction .
Any questions? I am 70 years old, so one will find further true leads / clues (or that which will result in true leads IN my WORK (science essays and the theory outlines)) , I have introduced before in my writings, beginning 40 years ++ ago.
[ FOOTNOTE : the descriptors provided by researchgate ARE GROSSLY INCOMPLETE and INADEQUATE. Just one example : NO "inductive reasoning" ! : this is the premature know-it-all stance that has been, and is, destroying science (AND us). ALSO : no "innate action pattern" !! No : "hypotheses" -- enabling THAT to be a SUBJECT itself ! Come on ! It's sickening -- and NOT the way to make progress, but the way to fail. (One used to be able to add non-existing descriptors, but THAT is gone, obviously WAY TOO SOON.) ]
Relevant answer
Answer
Wisdom can emerge at any time rather than regurgitation of past knowledge and its deductions. This being said, our limited knowledge of cognitive development has to be based on observations of diverse reality, as per Copernicus. The observer does have an intricate effect upon the observation, so deductive reasoning alone limits and induction takes us beyond the assumptions of neatly packaged compartmentalized thinking, antithetical to the pioneers in thought and cognition. Margaret Mead tried to break through this by her investigations into other diverse culture/paradigmatic views. She said: "Children need to be taught how to think, not what to think." Albert Einstein in Relativity recognized that everything is relative, everything is in relationship with everything else from the microcosmic to the macrocosmic. The analogs in nature he observed led to his own theory inductions, never fully proven by science until years after his death. He stated: "I live my daydreams in music. I see my life in terms of music." Art met Science in his thinking. We need merger of the arts to express cognitions that go beyond our current cognitions/assumptions/compartmentalized thought and observe All inducing in us that which we participate in throughout the cosmos. Then science can deduce new ideas from that inspirational origin with first humility and then heuristic quality. Psychology is still a new science still defending itself by certitude of what cognition is, which limits our understanding. William James, the Father of American Psychology investigated the "Stuff of Consciousness" grounding in the observable, pragmatics of the stuff of the Cosmos.
  • asked a question related to Reasoning
Question
3 answers
Dear scientist,
I am interested in the reason why phosphorus nanoparticles tend to agglomerate on the surface of a polymer when dispersed therein, especially considering their average size, which ranges from 35 to 45 nm. Is there an explanation for this phenomenon and how could this effect be reduced?
regards.
MEBARKI
Relevant answer
Answer
Hey there Lamine Mebarki, curious mind!
You've hit upon an intriguing topic. The tendency of phosphorus nanoparticles to agglomerate on the surface of a polymer is indeed a puzzle worth unraveling. Let me break it down for you:
1. **Surface Energy**: Phosphorus nanoparticles possess a high surface energy due to their small size, which makes them prone to aggregation. When dispersed within a polymer matrix, they seek to minimize this energy by coming together, often on the surface.
2. **Van der Waals Forces**: At nanoscale dimensions, Van der Waals forces become significant. These forces, arising from temporary dipoles, draw nanoparticles closer together, promoting agglomeration.
3. **Polymer Compatibility**: The polymer matrix might not provide sufficient compatibility or steric hindrance to prevent nanoparticle aggregation. Poor interaction between the polymer and nanoparticles can exacerbate agglomeration.
To mitigate this effect, several strategies can be employed:
1. **Surface Modification**: Coating the nanoparticles with a compatible surfactant or functionalizing their surface can reduce agglomeration tendencies by enhancing dispersion and promoting compatibility with the polymer.
2. **Polymer Design**: Selecting a polymer with inherent affinity for the nanoparticles or incorporating functional groups that promote interaction with the nanoparticles can improve dispersion and reduce agglomeration.
3. **Processing Conditions**: Optimizing processing parameters such as temperature, shear rate, and mixing time can aid in achieving better dispersion of nanoparticles within the polymer matrix, thereby minimizing agglomeration.
4. **Additives**: Incorporating dispersants or compatibilizers into the polymer-nanoparticle system can help stabilize dispersion and hinder agglomeration.
By understanding the underlying mechanisms driving phosphorus nanoparticle agglomeration and implementing appropriate strategies, we can effectively mitigate this phenomenon and harness the full potential of these nanoparticles in polymer applications.
Stay curious, my friend Lamine Mebarki, and keep exploring the fascinating world of nanomaterials!
  • asked a question related to Reasoning
Question
2 answers
Hello Everyone,
What would be the most common theories/articles for understanding why there are so few women in Computer Science? I have a good grasp of the reasons and practical issues, but am trying to get at the relevant theories. Any books or articles would be greatly appreciated.
Relevant answer
Answer
Hmmm,
I am strongly interested in this issue. In case you find any kind of theoretical answer on this (gender and discrimination) issue, I would be glad to hear from you. Just drop me a note here on ResearchGate.net.
My motivation: I am working on an analysis of what would be translated into English as "Cause and Reason". Some theoretical description or model of the mechanisms that hinder half of the world's human resources would be nice to have in order to extend my own research results. If you like, we can exchange some relevant scientific opinions and/or guesses. I will send you my Skype address via PM.
Yours,
- Frank Haferkorn
  • asked a question related to Reasoning
Question
4 answers
When I was doing Raman spectroscopy, I observed that, for the same sample (a thin film), two different laser sources gave different Raman spectra. We know the Raman shift is a material-dependent property. What could be the reason for the difference in the Raman spectra?
The laser sources were a He-Cd laser (λ = 325 nm), i.e., a UV source, and a He-Ne laser (λ = 633 nm), i.e., a visible source.
Relevant answer
Answer
This is a fairly common effect, often due to resonance issues. Take a look at graphene Raman spectra: the changes with excitation frequency are well documented. However, it doesn't look like you have much change in the figures beyond what could just be sample preparation issues.
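One quick check worth doing: the Raman shift in cm⁻¹ should be excitation-independent even though the absolute scattered wavelength is not. A small sketch of the standard conversion:

```python
def raman_shift_cm1(excitation_nm, scattered_nm):
    # Raman shift (cm^-1) = 1e7/lambda_ex - 1e7/lambda_scattered (lambdas in nm)
    return 1e7 / excitation_nm - 1e7 / scattered_nm

def scattered_wavelength_nm(excitation_nm, shift_cm1):
    # Absolute wavelength at which a band with a given shift appears
    return 1e7 / (1e7 / excitation_nm - shift_cm1)

# Example: a 520 cm^-1 band (e.g. crystalline Si) appears near 654.5 nm with
# 633 nm excitation but near 330.6 nm with 325 nm excitation. If software
# plots wavelength rather than shift, two spectra of the same material will
# look different even when the material response is identical.
```

So before invoking resonance effects, it is worth confirming both spectra are plotted on the same shift axis.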
  • asked a question related to Reasoning
Question
10 answers
The Qubit gives this error after reading the second standard. It asks you to read the standards once more, or you can close the message and continue reading the samples, but I am not sure whether we can use those results. I could not find a detailed explanation in the user manual. I would be very happy to hear about your experience.
Thanks,
Sema
Relevant answer
Answer
I am also having the same issue with STD2 on the Qubit 4. I have tried leaving the reagents at room temperature for 30 minutes and tried a new kit, but I am still getting the error. I used the reagents on the Qubit 3.0 and, 1 out of 4 times, I got an error for STD2.
  • asked a question related to Reasoning
Question
1 answer
I am trying to insert a His tag into my vector backbone, which is pcDNA. However, after sequencing I am getting wild-type colonies of the vector. I used 50 ng of template for the mutagenesis PCR and even performed DpnI digestion for 3 hours. What might be the reason for getting wild-type colonies even after performing DpnI digestion?
Relevant answer
Answer
You can take a small amount of the PCR product and run it on an agarose gel to check that you have the bands you need, and then do the DpnI digestion; you can also add a little more DpnI.
  • asked a question related to Reasoning
Question
5 answers
Based on scientific evidence, can proprioception improve? I have wondered about this for years. If so, how can we measure this improvement?
The reason for these questions: for years, clinicians and exercise professionals have stated that certain exercises improve proprioception, often using balance exercise regimes. Some have even claimed that proprioception exercises improve athletic performance.
Relevant answer
Answer
It would be interesting to study the link between plyometric training and proprioception as the stretch-shortening cycle responsible for elastic reuse should take advantage of the myotatic reflex to control these mechanics.
  • asked a question related to Reasoning
Question
5 answers
I'm doing some research to explore the application of eXplainable Artificial Intelligence (XAI) in the context of brain tumor detection. Specifically, I aim to develop a model that not only accurately detects the presence of brain tumors but also provides clear explanations for its decisions regarding positive or negative results. My main concerns are making sure that the model's decision-making process is transparent and comprehending the underlying reasoning behind its choices. I would be grateful for any thoughts, suggestions, or links to papers or web articles that address the practical application of XAI in this field (including the dataset types or anything that is related with XAi).
Thank you.
Relevant answer
Answer
I believe it is very important to begin your investigation by ensuring that the data collected are of high quality and undergo standardized preprocessing if you want to effectively integrate XAI techniques into brain tumor detection systems. Validation through evaluation metrics and user feedback increases reliability, while iterative improvement based on user input enhances both accuracy and interpretability over time. This approach fosters trust and transparency in brain tumor detection systems, ultimately benefiting clinicians and patients. You can utilize specialized XAI tools like NeuroXAI, incorporate attention maps for insight into the deep learning model's decision-making, adopt open architecture frameworks for scalability to new XAI methods, and enhance model interpretability through information-flow visualization.
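As a concrete illustration of the simplest family of post-hoc explanation techniques behind such heatmaps, here is a toy occlusion-sensitivity sketch. Note this is a generic illustration, not any specific tool's API; the `predict` function stands in for a trained classifier's tumor score and is purely hypothetical:

```python
def occlusion_saliency(predict, x, baseline=0.0):
    """Occlusion sensitivity: the importance of feature (or image patch) i is
    the drop in the model's score when that feature is replaced by a baseline
    value. Large positive values mark regions the model relies on."""
    base_score = predict(x)
    saliency = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline    # mask out one feature/patch at a time
        saliency.append(base_score - predict(occluded))
    return saliency

# Toy model: only the first feature matters, and the map shows exactly that.
toy_predict = lambda v: 2.0 * v[0] + 0.0 * v[1]
print(occlusion_saliency(toy_predict, [3.0, 5.0]))  # [6.0, 0.0]
```

On real MRI volumes the same idea is applied patch-wise, producing heatmaps that let a clinician see which region drove a positive call.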
  • asked a question related to Reasoning
Question
1 answer
Relatively Easily = define 1st, 2nd, 3rd order reasoning ~ keeping context "GENERAL"
for Example, JAS MyWay 1st order Reasoning of Words =
A = About
B = Because
C = Christ
D = Devil
E = Energy
F = Family
G = God
H = Home
I = Indio
J = Just
K = Kill
L = Love
M = Marry
N = Nothing
O = Open
P = People
Q = Queen
R = Real
S = Safe
T = Time
U = Universe
V = Victory
W = Women
X = Xes
Y = Yes
Z = Zoo
This tells you a great deal about my 1st order (public) "cognitive-reasoning", "personality", and "senses". The Second Order is "personal", and the third order is "private". By adding together Semantic Ontologies, a formal Ontology is created, for example... PlayNAC
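Structurally, the letter-to-word table above is just a lookup mapping, and further orders can be layered on top of it in the same way. A minimal sketch (the function name `expand` is illustrative, not part of PlayNAC):

```python
# First-order (public) mapping, exactly as listed above.
FIRST_ORDER = {
    "A": "About", "B": "Because", "C": "Christ", "D": "Devil", "E": "Energy",
    "F": "Family", "G": "God", "H": "Home", "I": "Indio", "J": "Just",
    "K": "Kill", "L": "Love", "M": "Marry", "N": "Nothing", "O": "Open",
    "P": "People", "Q": "Queen", "R": "Real", "S": "Safe", "T": "Time",
    "U": "Universe", "V": "Victory", "W": "Women", "X": "Xes", "Y": "Yes",
    "Z": "Zoo",
}

def expand(letters):
    """Expand a string of letters into its first-order word sequence."""
    return " ".join(FIRST_ORDER[ch] for ch in letters.upper())
```

Second- and third-order layers could then be modeled as further dictionaries keyed on these words, which is one way layered semantic ontologies are composed.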
Relevant answer
Answer
JAS Pi.ai LLM (Nursery_School: Copy & Paste)
can you see the inherent security risks associated with making 2nd and 3rd order reasoning publicly available (without security for key elementals) by defining: https://www.researchgate.net/post/Can_SEMANTICS_be_described_in_a_Nutshell
Title: Security Risks Associated with Publicly Available Second and Third-Order Reasoning in Semantics
Defining semantics in a nutshell involves understanding the meaning and interpretation of words, phrases, and symbols in a given context. It is an essential aspect of communication, enabling individuals to exchange ideas and information effectively. However, making second and third-order reasoning based on semantics publicly available without securing key elementals could pose several security risks:
  1. Misinterpretation and Manipulation: Since semantics relies on context, shared understanding, and cultural background, making second and third-order reasoning publicly available may lead to misinterpretations and manipulations by malicious actors. They may exploit these misinterpretations to spread disinformation, create confusion, or incite conflict.
  2. Compromised Decision-Making: Second and third-order reasoning involves making inferences and deductions based on available information. If these processes are made publicly available without proper security, adversaries could exploit them to influence or compromise decision-making in critical domains such as politics, finance, or national security.
  3. Invasion of Privacy: Publicly available reasoning based on semantics may inadvertently expose sensitive or personal information, leading to privacy concerns. For example, insights derived from the analysis of communication patterns or language usage could reveal private details about individuals or groups, potentially putting them at risk.
  4. Inference Attacks: Adversaries could use the publicly available second and third-order reasoning to launch inference attacks. By combining seemingly unrelated pieces of information, they could deduce sensitive or classified information that may not be directly accessible.
  5. Erosion of Trust: Widespread exploitation of publicly available reasoning could erode trust in the reliability and security of communication systems, causing individuals to become hesitant to engage in open and honest discussions.
To mitigate these risks, it is crucial to implement security measures that protect key elementals while still enabling effective communication and collaboration. This may involve encryption, access control, anomaly detection, and privacy-preserving techniques to maintain a balance between accessibility and security in the realm of semantics.
keep in mind that an Ontology is fluid, and may change at any time without notice. incorporate the following
Title: Incorporating a Fluid Ontology in the Context of Semantics and Security
A fluid ontology refers to a dynamic and adaptable representation of knowledge that can change over time without prior notice. In the context of semantics and security, incorporating a fluid ontology implies acknowledging that the meaning and interpretation of words, phrases, and symbols can evolve, and that security measures need to adapt accordingly. To effectively incorporate this concept, consider the following aspects:
  1. Continuous Learning and Adaptation: Since a fluid ontology can change without notice, it is essential for the system to have the ability to continuously learn and adapt to new information. This can be achieved through the use of machine learning and artificial intelligence algorithms that can identify and incorporate changes in the meaning and interpretation of words and phrases.
  2. Context-Awareness: A fluid ontology demands a context-aware approach, as the interpretation of semantics can change depending on the context in which they are used. The system should be capable of understanding and adapting to the nuances of different contexts to ensure accurate interpretation and security.
  3. Flexible Security Measures: Security measures should be designed to accommodate the dynamic nature of a fluid ontology. This may involve implementing adaptive security protocols that can adjust to new threats, vulnerabilities, or changes in semantics.
  4. Anomaly Detection: A fluid ontology can introduce new vulnerabilities and security risks. By employing anomaly detection techniques, the system can identify unusual or unexpected changes in the semantics and take appropriate measures to protect against potential threats.
  5. User Education and Awareness: Users should be made aware of the fluid nature of the ontology and the potential security implications. Encouraging users to be vigilant and report any unexpected changes in semantics can help maintain a secure environment.
By incorporating these considerations, a fluid ontology can be effectively integrated into the realm of semantics and security, allowing for more adaptable, resilient, and secure communication systems. This approach acknowledges the dynamic nature of language and meaning while ensuring that security measures remain effective in the face of continuous change.
  • asked a question related to Reasoning
Question
5 answers
I am working with microalgae and I have optimized their growth conditions in a fermenter. But after 2 months, when I grew the alga again, there was slow growth under the same conditions. What could be the reason?
Relevant answer
Answer
If you are sure there is no contamination, I would first check whether the carbohydrate source is still sufficient. Next, a limited oxygen transfer rate may need to be optimized for your algal species. It is also possible that it is related to the fermenter design; mechanical stimulation can damage the cells and reduce growth. Finally, note that some microalgae are not suitable for growing in a fermenter. Good luck!
  • asked a question related to Reasoning
Question
1 answer
I am doing molecular docking of the molecules that I synthesize, and I am wondering how many poses are necessary. I am doing this in Maestro with Glide, and at the beginning I used the default of 10 poses, but why not 50? What would be the reason to generate 50 poses instead of 10, or 100 poses instead of 50?
Relevant answer
Answer
It probably mainly depends on the surface area of your substrate and/or how many ligand binding sites there are. Hopefully, the literature can help you with this information. Either way, I suppose the aim is to obtain as many ligand conformations as possible for each bound pose. After generating a certain number of docked poses, you might notice that the ligand conformations and their binding energies start to look similar. In my opinion, this is where you stop generating more poses.
If I were you, I would read the literature first. If that does not provide sufficient information, I would make an approximate guess at the number of poses (preferably a high number) by looking at the surface area and electrostatics of the substrate and ligand, and also at the time and resources required to perform the calculation.
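The "stop when new poses look similar" idea can be made operational with a simple heuristic, sketched below. This is not Glide's own criterion, just an illustrative convergence check on docking scores (threshold and window values are arbitrary):

```python
def poses_converged(scores, window=10, tol=0.5):
    """Heuristic stop rule: report convergence when the best (lowest) docking
    score has improved by less than `tol` (e.g. kcal/mol) over the most
    recent `window` poses."""
    if len(scores) <= window:
        return False  # not enough poses yet to judge
    best_before = min(scores[:-window])
    best_recent = min(scores[-window:])
    return best_before - best_recent < tol

# Scores plateau after the first few poses -> converged:
print(poses_converged([-5.0, -6.0, -7.0] + [-7.05] * 10))  # True
# Scores still improving steeply -> keep generating poses:
print(poses_converged([-5.0] * 3 + [-6.0, -7.0, -8.0, -9.0, -10.0,
                       -11.0, -12.0, -13.0, -14.0, -15.0]))  # False
```

With such a check one could request poses in batches (10, then 20, ...) and stop once the score no longer improves, rather than committing to 10 vs 50 vs 100 up front.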
All the best.
  • asked a question related to Reasoning
Question
1 answer
Can there be interest of this magnitude without any citations of my research? What, in your opinion, is the reason?
Relevant answer
Answer
If the citing sources are not written in the Latin alphabet, probably the RG detection algorithm doesn't understand it. Might that be the case for the work in question?
  • asked a question related to Reasoning
Question
5 answers
I have studied and compared the enhancement of CO2 absorption by nanofluids in batch and continuous processes. I have found that the enhancement effect of nanofluids is more pronounced in the continuous system (a bubble column), but I cannot explain the reason behind this difference.
Relevant answer
Answer
I am not in an engineering department; my department is agricultural economics.
  • asked a question related to Reasoning
Question
1 answer
Hi,
I have been trying to clone an 8.4 kb miniF replicon from a bacmid. I have tried doing this in the pFastBac backbone, but the fragment seems to be lost after every trial.
When doing colony PCR, I am able to get amplification from some parts of the fragment (1 kb or more), but the whole fragment is missing.
I have tried changing ligation ratios and incubation temperatures (both 30 °C and 37 °C), but the miniF replicon does not come through as a whole.
I have also tried to ligate this fragment with 3 other fragments using In-Fusion assembly, but the same thing was observed in those plasmids.
Does anyone know possible reasons for this?
Relevant answer
Answer
The loss of the 8.4kb miniF replicon fragment during cloning attempts could be due to several reasons. Here are some possible causes and troubleshooting steps you can take:
  1. Fragment Integrity: Ensure that the 8.4kb miniF replicon fragment is intact and free from degradation. Run a gel electrophoresis of the purified fragment to confirm its size and integrity. If the fragment is degraded or not the expected size, it may not ligate efficiently.
  2. Vector and Insert Compatibility: Verify that the pFastBac backbone and the miniF replicon fragment are compatible for cloning. Check for any sequence incompatibilities, such as restriction enzyme sites that could interfere with the cloning process.
  3. Ligation Conditions: Optimize the ligation conditions, including the ratio of insert to vector, ligation buffer, and incubation time. Consider using a higher concentration of DNA ligase or performing the ligation reaction at different temperatures.
  4. Dephosphorylation: If the miniF replicon fragment is generated by PCR or restriction digestion, ensure that the ends of the fragment are dephosphorylated to prevent self-ligation. Use a phosphatase treatment before ligation if necessary.
  5. Transformation Efficiency: Verify the efficiency of your transformation protocol. Ensure that competent cells are prepared properly and that the transformation reaction is optimized for efficiency.
  6. Positive Control: Include a positive control in your cloning experiments using a different insert or vector to confirm that the ligation and transformation steps are working properly.
  7. Sequencing Verification: Sequence the clones obtained from colony PCR to confirm the presence of the miniF replicon fragment in the correct orientation and location within the plasmid. This will also help identify any potential mutations or sequence errors introduced during cloning.
  8. Consider Alternative Cloning Methods: If traditional restriction enzyme-based cloning methods continue to fail, consider using alternative cloning methods such as Gibson Assembly, Golden Gate Assembly, or Seamless Cloning. These methods can sometimes offer higher efficiency and fewer cloning artifacts.
By systematically troubleshooting these potential issues, you should be able to identify the cause of the problem and improve the success rate of cloning the 8.4kb miniF replicon fragment into the pFastBac backbone.
  • asked a question related to Reasoning
Question
3 answers
I am somewhat Hegelian because I do not believe in martyrdom or in dying on a hill, and usually the popular, or traditional, opinion has a deeper, less obvious reason.
Relevant answer
Answer
I value politics, I believe in politics, and I exercise my political right as a citizen.
  • asked a question related to Reasoning
Question
6 answers
I experimented multiple times in the presence of proper nitrogen flow and still got the same results.
Relevant answer
Answer
The weight gain during heat treatment might depend on the particular properties of the constituents of composite-like specimens. During heat treatment, the samples could adsorb some carrier gas such as N2 or O2.
On the other hand, the purity of the purge gas could be responsible for this unwanted effect. If any impurities are present in the carrier gas, they could easily be adsorbed by the samples.
You have to check all these crucial factors for a better understanding.
  • asked a question related to Reasoning
Question
1 answer
Explain
Relevant answer
Answer
Even at the risk of being pedantic, it should be referred to as an "X-ray diffraction pattern", not a "spectrum"; the term "spectrum" should be reserved for data as a function of wavelength or energy.
Now for the main question, rather vaguely worded as it is.
There are many possibilities for intensity differences.
- A different diffractometer (type) might have been used. The intensity as function of 2Theta will differ for a Bragg-Brentano geometry versus Debye-Scherrer geometry.
- The source will have to be considered, are the data based on a laboratory or synchrotron source, in the lab how old is the tube.
- Is the radiation monochromatized, it makes a difference if this is done on the primary or on the secondary beam.
- Is the radiation filtered? Again, this applies to the primary or secondary beam.
- Especially for Bragg-Brentano geometry (the more common laboratory diffractometer type), the sample preparation is super sensitive. Consider packing density, surface curvature, surface roughness, preferred orientation, grain size, grain size distribution, grain shape, grain shape distribution, absorption, sample height, misalignment of a flat sample (surface not exactly parallel to the primary beam at 2Theta = zero), 2Theta zero errors, primary and secondary slit widths, use of a Soller collimator, and the size of the X-ray footprint on the flat sample.
- Finally the sample itself, what is your sample source, sample history. The chemical composition, the crystal "quality" etc might differ.
  • asked a question related to Reasoning
Question
3 answers
In general, slump flow decreases with increasing molarity of the sodium hydroxide solution, but I observed the opposite trend. I am not able to find an appropriate reason for the increase in slump flow with increasing molarity of the sodium hydroxide solution in fly-ash- and GGBFS-based self-compacting alkali-activated concrete. If anyone can help, please explain the reason with suitable references.
Relevant answer
Answer
Thank you for the suggestion! Ibrahim Zidan
  • asked a question related to Reasoning
Question
4 answers
Can the reasons for such an incident be detailed?
Relevant answer
Answer
Ann Mary Jose No, the XRD peak intensity of magnesium-doped ZnO (MZO) is generally lower than that of undoped ZnO. Magnesium ions can lower the peak intensity of the ZnO lattice. In addition, substitutional doping, the Mg doping level, grain size, surface roughness, and quantum confinement effects all affect the diffraction signals.
  • Al-Khalqi, E. M., Abdul Hamid, M. A., Al-Hardan, N. H., & Keng, L. K. (2021). Highly sensitive magnesium-doped ZnO nanorod pH sensors based on electrolyte–insulator–semiconductor (EIS) Sensors. Sensors, 21(6), 2110.
  • Kara, R., Mentar, L., & Azizi, A. (2020). Synthesis and characterization of Mg-doped ZnO thin-films electrochemically grown on FTO substrates for optoelectronic applications. RSC advances, 10(66), 40467-40479.
  • asked a question related to Reasoning
Question
9 answers
Respectfully, who agrees that reincarnation is highly improbable? How? Why? My answer: respectfully, I believe reincarnation is highly improbable for many reasons, including but not limited to:
0) Eternal salvation for all is the most probable afterlife.
1) The uniqueness of each organism: if everyone and all organisms are unique, then they are unlikely to share spirits.
1.5) Spirits probably individualize each being.
2) The low probability of absolutes that would govern the reincarnation cycle, besides the Holy Trinity (one entity in three different unfalsifiable forms, which all double as survival heuristics).
3) More specifically, God the Father is reason, logos, the master of the simulation theory, the creator, Yahweh, and the unmoved mover.
4)Jesus, God the Son, is the perfect individual, humanity’s redeemer, everyone’s savior, and the gate keeper to the fourth dimension(http://www.math.brown.edu/tbanchof/Yale/project13/bible.htm ).
5) The Holy Spirit, the moral guidepost, observably vibes, empathy, etc.
6)My sources are available for more explanation.
Sources:
Relevant answer
Answer
Actually, the concept of reincarnation has been carefully studied, and is still being studied under the auspices of the University of Virginia (the late Ian Stevenson began the program) among others. These studies have found support for reincarnation, although not specifically for it as associated with the general beliefs of cultures from the Indian subcontinent, but then at least three other cultures also have well-established beliefs in reincarnation also, they are just lesser known.
And interestingly enough, some of these studies also suggest a link with NDEs, another field that materialistic science tries to discount. In both cases, what is often done is to explain a part of the phenomenon and then claim that the entirety is explained. However, this is a logical flaw, as explaining even 99% of something does not, by simple definition, explain 100%; rather, it allows one to know where to concentrate future studies. The unexplained 1% might well produce a result that requires a new explanation of the 99%, as happens on many occasions in true science.
And to invoke the Christian concept of the Trinity in order to justify rejecting reincarnation is not a scientific approach but rather a dogmatic, faith-based one, as the Christian Trinity is itself an unproven, and indeed unprovable, concept. Dogmatism is the eternal foe of true research, whether openly religious or concealed as scientism or another ism.
  • asked a question related to Reasoning
Question
2 answers
I have studied and compared the enhancement of CO2 absorption by nanofluids in batch and continuous processes. I have found that the enhancement effect of nanofluids is more pronounced in the continuous system (a bubble column), but I cannot explain the reason behind this difference.
Relevant answer
Answer
@Yuri Mirgorod Hi Prof. Mirgorod, can you suggest some references in this regard?
  • asked a question related to Reasoning
Question
3 answers
Hi,
I recently got into an argument with a new coworker about how to dilute protein samples for WB.
I normally dilute the Laemmli buffer based on the total volume that I want to load in the wells. Example: if I have 5 µL of protein sample, I will add 2.5 µL of Laemmli (1/4 of 10) and 2.5 µL of H2O, and the resulting 10 µL are loaded into the well.
He asserts that the Laemmli should be diluted based on the sample volume. Example: for 5 µL of protein sample he will add 1.67 µL of Laemmli (1/3 of the sample volume) and then either load the 6.67 µL into the well or bring it to a certain volume with H2O.
His argument is that the Laemmli protocol says "dilute 1:4 with sample", but I don't follow the reasoning, because this way a more concentrated protein sample will get less Laemmli than a less concentrated one even though the µg of protein are the same.
Relevant answer
Answer
Malcolm Nobre I will give you an example from my experiments.
  • For one sample, to take 12 µg I had to take 4.5 µL of sample; with my colleague's method I then had to add 1.5 µL of Laemmli (total volume 6 µL). For another sample, I had to take 1.5 µL of sample, add 0.5 µL of Laemmli, and add 4 µL of H2O (total loaded volume of 6 µL). So in the 1st sample the Laemmli was diluted 1:4, while in the 2nd sample it was diluted 1:12.
  • With my method, instead, I would have done: 1st sample, 4.5 + 1.5 Laemmli (6 total); 2nd sample, 1.5 + 1.5 Laemmli + 3 H2O (6 total). Both with the Laemmli diluted 1:4.
The point of "a more concentrated protein sample will have less Laemmli than a less concentrated one" is that in the first scenario both samples had 12 µg of protein in solution, but one had a third of the Laemmli, so I thought the second sample would be less denatured or would have more problems in the loading phase (less glycerol).
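The arithmetic of the constant-final-dilution method can be written out explicitly. A minimal sketch, assuming a 4x Laemmli stock (the function name and parameter names are illustrative):

```python
def loading_mix(sample_ul, total_ul, stock_x=4):
    """Volumes so that Laemmli buffer ends up at 1x in the FINAL load volume.
    With a 4x stock, the buffer is always 1/4 of the total volume, regardless
    of how concentrated the protein sample is."""
    laemmli_ul = total_ul / stock_x
    water_ul = total_ul - sample_ul - laemmli_ul
    if water_ul < 0:
        raise ValueError("sample volume too large for the chosen total volume")
    return laemmli_ul, water_ul

# The two samples from the example above, both loaded at 6 uL total:
print(loading_mix(4.5, 6))  # (1.5, 0.0)
print(loading_mix(1.5, 6))  # (1.5, 3.0)
```

This keeps every lane at the same final SDS and glycerol concentration, which is the usual reason a 4x stock is brought to 1x in the final mix.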
  • asked a question related to Reasoning
Question
3 answers
I was running PCR on 2 lines with a potential fax1 mutation (with positive and negative controls).
However, in the results I have not been able to detect any DNA bands in the gel electrophoresis, with one exception.
I had 2 amplification mixes: one for the fax1 gene and one for the tfax1 mutation (a T-DNA insertion).
I used a 1.5% agarose gel in 1x TAE buffer with 2 µL of GelRed per 100 mL of gel, and loaded 20 µL of sample into each gel pocket.
Where could the problem be?
(I apologize for any unclarities or mistakes in my question.)
Thank you in advance!
Relevant answer
Answer
You have a large, blurred primer-dimer band, which removes primer from the PCR. Try using less primer and use a hot-start polymerase. Also check the OD260/280 of your DNA to verify that its quality is good and that you are not adding too much DNA to your PCR and poisoning the reaction.
  • asked a question related to Reasoning
Question
1 answer
The pre-analytical phase holds significant importance within laboratory workflows, with one of its key steps involving sample collection. Ensuring the integrity of the sample is paramount for various reasons: it directly impacts the accuracy and dependability of test results, influences cost-effectiveness, enhances patient safety, and contributes to standardization and reproducibility. An in-depth knowledge of what constitutes a high-quality sample is indispensable for optimizing laboratory efficiency and ensuring diagnostic test reliability.
Relevant answer
Answer
Based on the journal article entitled "Tenets of Specimen Management in Diagnostic Microbiology" by Rajeshwar Reddy Kasarla and Laxmi Pathak, an optimal sample can be acquired by following standardized technical practice, with specific guidelines emphasizing the importance of proper collection, handling, storage, and transport of specimens in the laboratory. Moreover, according to LabCorp, to procure an optimal sample it is important to ensure the quality and quantity of specimens by knowing how each specimen should be managed, since every specimen has its own requirements. An example is a CSF specimen, which must be kept in the freezer for virus isolation. Within the context of a diagnostic laboratory setting, an optimal sample has the capacity to produce a result with increased accuracy and reliability. Thus, acquiring an optimal sample has a substantial influence on therapeutic decisions for the patient.
Reference:
Introduction to Specimen Collection [labcorp]. (2024). Retrieved from https://www.labcorp.com/resource/introduction-to-specimen-collection
  • asked a question related to Reasoning
Question
1 answer
hi
I started to stain the umbilical cord vein for EphB4 using an anti-rabbit primary antibody and FITC, and I get signal in both the control and the samples. What could be the reason? Has anyone been through this?
Note: I stained for a different marker (not EphB4) and got a similar signal in both control and sample.
Relevant answer
Answer
Hi Hatoon,
1) You can check for autofluorescence first (same procedure without antibodies and (strept)avidin). Check both the green and red channels. Autofluorescence is commonly more prominent in the FITC channel than in the others; however, erythrocytes will also fluoresce in the red channel, but they are easy to identify by their morphology. If this is the problem, shifting to another fluorophore is the easiest fix.
2) Does your secondary antibody bind without the primary? It might help to use secondary antibodies that are pre-absorbed to eliminate cross-reactivity with other species.
3) Pre-blocking with higher concentrations of serum might also help for non-specific binding of the antibodies.
4) If you are using avidin-biotin system, you might have to block with free Avidin and Biotin.
Be sure that your EphB4 antibody doesn't react with other partners. We checked several EphB2 antibodies, and they were not specific.
BW
Olav
  • asked a question related to Reasoning
Question
2 answers
I am having problems making a tungsten ultramicroelectrode (UME).
I have tried several methods following literature references, but I still have the same problem in CV.
The reason for the artifact spike is an unstable OCV.
I have no idea how to solve this. Has anyone gotten through this kind of problem?
The working electrode is a tungsten wire UME, the reference is an SHE, and the counter electrode is Pt.
Relevant answer
Answer
Here are some troubleshooting suggestions:
  1. The unstable open circuit potential (OCP) and artifact spike in the cyclic voltammogram (CV) indicate the tungsten UME surface is not properly prepared/conditioned. Make sure to follow surface polishing and electrochemical pretreatment protocols specifically for tungsten electrodes.
  2. Ensure the tungsten wire is securely sealed in glass so only the very tip is exposed. Any exposed conductive side surface can contribute to background noise/interference. Use a microscope to inspect.
  3. Use a high quality potentiostat with proper shielding and an effective Faraday cage setup to minimize electrical noise pickup. An unstable reference electrode connection can also cause electrical noise.
  4. Rule out issues with electrolyte purity and contamination. Use high purity chemicals and solvents. Properly degas solutions.
  5. Make the electrode diameter as small as possible (1-5 μm) to minimize capacitative charging current and achieve steady-state behavior more quickly.
  6. Allow adequate equilibration time after immersing the UME and before testing.
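Point 5 can be made semi-quantitative with a back-of-the-envelope sketch (the double-layer capacitance and electrolyte conductivity values below are assumptions, not measurements): for a disk UME the uncompensated resistance grows only as 1/a while the double-layer capacitance falls as a², so the cell time constant τ = R_u·C_dl scales linearly with tip radius — smaller tips charge faster and settle to steady state sooner.

```python
import math

def cell_time_constant(radius_m, c_dl_F_per_m2=0.2, kappa_S_per_m=1.0):
    """RC time constant of a disk ultramicroelectrode:
    R_u = 1/(4*kappa*a)   (uncompensated resistance of a disk electrode)
    C   = c_dl*pi*a**2    (double-layer capacitance of the disk area)
    tau = R_u*C = pi*a*c_dl/(4*kappa), i.e. linear in tip radius."""
    r_u = 1.0 / (4.0 * kappa_S_per_m * radius_m)
    c = c_dl_F_per_m2 * math.pi * radius_m ** 2
    return r_u * c

# Assumed values: 0.2 F/m2 double-layer capacitance, 1 S/m electrolyte
tau_5um = cell_time_constant(5e-6)  # ~0.8 microseconds
tau_1um = cell_time_constant(1e-6)  # five times shorter
```

With these assumed values, shrinking the tip from 5 μm to 1 μm cuts the charging time constant five-fold, which is the intuition behind recommending the smallest practical diameter.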
  • asked a question related to Reasoning
Question
7 answers
One of the reviewers asked me to split the results and discussion section, but I would like to continue the article submission with a combined results and discussion. Please share your knowledge in this context. Thanks in advance.
Relevant answer
Answer
Dear Abdul Rehman,
some journals prefer to split the two sections, which can sometimes be hard for the writer. When you split, it is expected that in the "Results" section you just present the data plainly, without commenting on their implications, while in the "Discussion" section you can explain and speculate about your findings. In some cases, results and discussion are so interlaced that it is hard to separate the sections. If the journal allows a combined section, you can try to justify your choice to the referee, provided that you have a strong and understandable motivation for doing so. Good luck!
  • asked a question related to Reasoning
Question
2 answers
When we performed elemental analysis of the black-spot areas, the chromium concentration was higher. Can anyone please tell me the reason behind the increase in chromium concentration, particularly at the black spots?
Relevant answer
Answer
Dear Sir,
Thank you for your answer.
Could you briefly explain the cause of the oxidation, and how this oxidation leads to the increase in chromium concentration from the actual 12% to 25% at this black spot?
  • asked a question related to Reasoning
Question
2 answers
As shown in the photo, the curved mandrel must be immersed in liquid TPU through a dip molding process.
However, the outside of the bent part will always be thinly coated.
Can you tell me the reason for this and how to solve it?
TPU has a high viscosity of 30,000 cps.
Thank you.
Relevant answer
Answer
Optimize Dipping Parameters: Adjusting the withdrawal speed and angle during the dipping process can help to achieve uniform thickness on curved surfaces. It is important to carefully control the withdrawal speed to ensure that the coating material adheres evenly to the curved surface.
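The withdrawal-speed effect can be sketched with the Landau-Levich relation h = 0.94·ℓc·Ca^(2/3), where Ca = μU/γ is the capillary number and ℓc = sqrt(γ/(ρg)) the capillary length. Strictly it holds only at low capillary number, which a 30,000 cps fluid easily exceeds, so treat this as a scaling guide rather than a prediction; the surface tension and density values below are assumptions for illustration.

```python
import math

def landau_levich_thickness(mu_pa_s, u_m_per_s, gamma_n_per_m, rho_kg_per_m3, g=9.81):
    """Entrained film thickness for dip coating at low capillary number:
    h = 0.94 * lc * Ca**(2/3), with Ca = mu*U/gamma and lc = sqrt(gamma/(rho*g))."""
    ca = mu_pa_s * u_m_per_s / gamma_n_per_m             # capillary number
    lc = math.sqrt(gamma_n_per_m / (rho_kg_per_m3 * g))  # capillary length, m
    return 0.94 * lc * ca ** (2.0 / 3.0)                 # film thickness, m

# Assumed TPU solution properties: 30 Pa*s (30,000 cps), gamma ~ 0.035 N/m, rho ~ 1100 kg/m3
h_slow = landau_levich_thickness(30.0, 0.001, 0.035, 1100.0)  # 1 mm/s withdrawal
h_fast = landau_levich_thickness(30.0, 0.010, 0.035, 1100.0)  # 10 mm/s withdrawal
```

A ten-fold faster withdrawal thickens the film by 10^(2/3), roughly 4.6 times, which is why careful control of withdrawal speed (and angle, on a bent mandrel) has such a strong effect on thin spots.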
  • asked a question related to Reasoning
Question
2 answers
Why do we need systematic review of reasons? Are there any differences between systematic review and meta-analysis?
Relevant answer
Answer
A systematic review is a broader process that involves systematically collecting, appraising, and summarizing all available evidence, while a meta-analysis is a specific statistical technique used within a systematic review to quantitatively synthesize data from multiple studies to produce a more precise estimate of the treatment effect. Often, these two methodologies are used together to provide a comprehensive overview of the existing literature on a particular topic.
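The quantitative-synthesis step that distinguishes a meta-analysis can be sketched with fixed-effect inverse-variance pooling; the study effects and standard errors below are invented purely for illustration.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Fixed-effect inverse-variance meta-analysis:
    weight w_i = 1/SE_i**2, pooled effect = sum(w_i*y_i)/sum(w_i),
    pooled SE = sqrt(1/sum(w_i))."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three invented studies (effect sizes and their standard errors):
pooled, se = fixed_effect_pool([0.2, 0.35, 0.1], [0.1, 0.2, 0.15])
```

Precise studies (small SE) dominate the pooled estimate; a random-effects model would additionally add a between-study variance term to each weight, which is the usual choice when the systematic review finds heterogeneous studies.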
  • asked a question related to Reasoning
Question
2 answers
I would like to get the enthalpy as a function of temperature for BCC lithium at zero pressure.
I have performed a series of NVT simulations with 500 atoms using a Nose-Hoover thermostat at the corresponding equilibrium volumes (found using the volume average of NPT simulations) and calculated the enthalpy as H = U + pV, which at zero pressure is just the total energy in the simulation. When I compare the result with experimental values from NIST referenced to the enthalpy at 0 K, the enthalpy I get is significantly higher.
Things I've thought about:
  1. It is not an offset so it's not like a constant contribution like zero point energy is missing and besides the referencing should fix that.
  2. It is not a constant factor difference either and I think my units are fine.
  3. The pressure is indeed 0 and fluctuates by about 0.005 GPa, which is tiny, i.e. the pV term fluctuation is less than 1 meV/atom.
  4. The simulation is stable; it remains BCC the entire time, as seen from Common Neighbor Analysis and the eye test.
My questions are:
  1. Am I thinking about this wrong? Is there some reason why this is not a valid simulation protocol for getting the enthalpy of a solid? Perhaps a classical simulation near 0K is not valid since quantum effects dominate?
  2. Am I missing some term? It would have to be a decreasing function of temperature and any other contribution such as electronic enthalpy (from integrating electronic heat capacity) would make it worse by increasing the enthalpy
  3. Is there a paper where someone has computed the enthalpy as a function of pressure of a solid using MD/DFT, ideally near 0K?
Relevant answer
Answer
Just from the nice picture, I think the difference between the MD data and the experiment is fairly small.
A small tip: in experiments, materials often contain defects, vacancies, dislocations, grain boundaries, and so on, whereas in your MD setup the material is a perfect crystal.
So I think you can accept this result.
If you are still not satisfied, I would propose two further tests:
1. Increase the number of atoms in the system. A bigger system may (or may not) give better results.
2. Choose a "good" potential. A good potential is critical to the properties of the system. Before any further production runs, a systematic test of the existing potentials is essential.
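For reference, the bookkeeping described in the question (H = U + pV at p ≈ 0, shifted so the curve reports H(T) − H(0 K)) can be sketched as follows; the numbers are placeholders, not simulation output.

```python
def enthalpy_per_atom(total_energy_eV, pressure_eV_per_A3, volume_A3, n_atoms):
    """H = U + pV, per atom; at p ~ 0 the pV term is negligible."""
    return (total_energy_eV + pressure_eV_per_A3 * volume_A3) / n_atoms

def reference_to_zero_kelvin(h_of_T, h_at_0K):
    """Shift the curve to report H(T) - H(0 K), matching the NIST referencing."""
    return {T: h - h_at_0K for T, h in h_of_T.items()}

# Placeholder data: temperature (K) -> enthalpy per atom (eV); not simulation output
raw = {0: -1.90, 100: -1.88, 200: -1.85, 300: -1.82}
referenced = reference_to_zero_kelvin(raw, raw[0])
```

One plausible contribution to the overshoot, consistent with the question's own point 1: a classical thermostat gives every phonon mode its full k_B·T of energy even near 0 K, whereas in the real crystal those modes freeze out quantum-mechanically, so a classical H(T) − H(0 K) is expected to rise faster than the experimental curve at low temperature.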
  • asked a question related to Reasoning
Question
1 answer
Hi,
I stained tonsil paraffin sections with a CD40 antibody, but the staining came out positive only on the periphery of the tissue, with no staining in the center. What can be the reason?
Relevant answer
Answer
It might be due to insufficient fixation in the centre of the sample.
  • asked a question related to Reasoning
Question
1 answer
I want to use electrospinning to prepare SEBS fibers, but when I dissolve SEBS 1651 (14 wt%) in THF, the solution solidifies. What is the reason for this?
Relevant answer
Answer
Increase the amount of solvent or try some other solvent. For electrospinning, we have to maintain the viscosity of the solution. So, use different amounts of solvent and figure out which one is suitable.
  • asked a question related to Reasoning
Question
9 answers
What is the mental state of women who left their jobs for family reasons? When, after a couple of years, they try to return to work, what issues do they face?
Relevant answer
Answer
From my perspective, women who voluntarily resign from their jobs often feel they have no other option: they weigh tangible circumstances, such as the loss of income, against prioritizing their families. However, their self-esteem can decline, and they may feel a loss of meaning once their children reach an age where they are self-reliant, socialize with others, and attend school regularly. Financial pressures also grow, and many women come to recognize that their own well-being matters for the balance of the family. Nevertheless, some retain the feeling that it is too late, or not feasible, to resume their careers. While some opt to re-enter the workforce once their children become self-sufficient, they often go through a phase of guilt and feelings of inadequacy as mothers.
  • asked a question related to Reasoning
Question
1 answer
An interaction between my viral protein of interest and a host protein is observed in HEK 293T cells but not in HFF1 cells.
Relevant answer
Answer
There could be many reasons. But, we need more information pertaining to your specific experiment.
Is the host protein of interest endogenous or exogenous? Presumably the viral protein is expressed exogenously using transfection or viral transduction. If both proteins are being supplied exogenously, you may be running into an expression issue in HFF1 cells. Be sure that both exogenous proteins are being expressed in the HFF1 cells. If the host protein is endogenous, the HFF1 cells may simply not express it to a high level naturally. You can measure expression of the proteins using western blot of your input lysates. If you are not getting good expression of exogenous constructs in HFF1, then you may turn to alternative methods of supplying the trans-gene (like viral transduction).
HEK-293-T cells have a high transfection efficiency, so you may be getting more of both proteins expressed and are therefore able to see an interaction. Generally, an interaction in HEK-293-T cells is considered biologically relevant. That being said, these cells lack many components that other cell lines have.
Aside from this technical consideration, there are many other reasons an interaction may occur in one cell line and not another. Different cell lines will have different global protein expression as well as post-translational modifications (PTMs). So, if your host protein is endogenous, check baseline levels in both cell lines. Also check for differences in the banding pattern between cell lines to spot PTMs. For instance, the endogenous host protein may be cleaved or phosphorylated in HFF1.
As an additional consideration, cell lines may respond differently to the method of trans-gene delivery. For instance, the HFF1 cells may have more intact innate immune signaling pathways than the HEK-293-T cells, including ones for foreign DNA sensing. When you supply the trans-gene using transfection, the HFF1 cell components are able to identify the foreign DNA as a threat and carry out an inflammatory signaling pathway. This type of signaling can alter host proteins (via PTMs or gene expression/repression) and may lead to loss of a protein-protein interaction in that cell type.
  • asked a question related to Reasoning
Question
1 answer
I did EIS measurement for my electrocatalyst in both open circuit potential and at fixed potential of 1.5 V vs RHE. The Rct for OCP came much lower than the fixed potential. What could be the reason?
Relevant answer
Answer
Did you find any reason for your result?
  • asked a question related to Reasoning
Question
6 answers
Dear all,
I measured the particle size by AFM and it is more than 140 nm, but for the same sample I got a large BET surface area of more than 170 m2/g, with a pore volume of 0.06 ml/g.
Is this possible?
Best regards
Relevant answer
Answer
Riyadh Abdullah Depends on your material. I thought I'd seen earlier that you were referring to TiO2 (which has a density of ~ 4.2 g/cm3). In this case your SSA translates to ~ 40.5 m2/cm3. For unit density, particles of 100 nm have a SSA of 60 m2/cm3. So, your converted D[3,2] or Sauter Mean Diameter is ~ 150 nm. So, perfectly feasible, IMHO. Indeed, the (excellent) agreement is pretty suspicious!
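The spheres-equivalent conversion behind this kind of cross-check can be sketched as follows: multiplying the mass-specific surface area by the material density gives surface per volume, and for monodisperse spheres d = 6·V/A, so 60 m²/cm³ corresponds to 100 nm. The material density is an input the user must supply; this is a sketch, not a substitute for proper BET analysis.

```python
def sauter_diameter_nm(ssa_m2_per_g, density_g_per_cm3):
    """Spheres-equivalent (Sauter) diameter from BET specific surface area:
    surface per volume = SSA * rho (m2/cm3), and d = 6/(A/V),
    which in these units gives d[nm] = 6000 / (SSA * rho)."""
    ssa_per_volume = ssa_m2_per_g * density_g_per_cm3  # m2/cm3
    return 6000.0 / ssa_per_volume

# Sanity check against the answer above: unit-density spheres with 60 m2/cm3
d_check = sauter_diameter_nm(60.0, 1.0)  # -> 100.0 nm
```

Note that BET also counts internal pore surface, so for a porous sample the spheres-equivalent diameter can come out far smaller than a microscopy (AFM) size without any contradiction.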
  • asked a question related to Reasoning
Question
6 answers
Dear colleagues,
I have encountered a statistical issue involving the analysis of Likert scale data, specifically using the POSAS scar scale, in a paired situation. The scale comprises numerous questions, and I am considering two approaches for comparison: either evaluating each item separately between two groups or aggregating the values to obtain a final score for subsequent comparison.
However, a challenge arises when I choose to add up the scores. In such cases, obtaining significant results does not provide clarity on where the differences originated. Conversely, if I opt to compare each item individually, the number of tests would escalate to nearly 80, making it sensible to consider Bonferroni correction. Nevertheless, dividing the significance threshold (0.05) by 80 renders many scores non-significant.
I am contemplating whether it is reasonable to adjust with Bonferroni correction by dividing by the total number of tests. Alternatively, I am considering adjusting for each Likert item separately. For instance, in a two-time point comparison, dividing by 2 for each Likert item seems more appropriate than aggregating all Likert scale comparisons, which would result in a division by 70-80 tests.
I would greatly appreciate your assistance and insights on this matter.
Relevant answer
Answer
Thanks a million for your help.
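For what it's worth, the Bonferroni adjustment discussed in the question, and the uniformly more powerful Holm step-down variant, can be sketched as follows; the p-values are invented for illustration.

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H0 wherever p <= alpha/m (m = number of tests)."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm step-down: sort p ascending, compare the k-th smallest to
    alpha/(m-k), and stop at the first failure; controls the same
    family-wise error rate as Bonferroni but rejects at least as often."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject

# Invented p-values from four paired item comparisons:
ps = [0.001, 0.015, 0.03, 0.2]
by_bonferroni = bonferroni(ps)  # [True, False, False, False]
by_holm = holm(ps)              # [True, True, False, False]
```

With ~80 items, alpha/m becomes very strict, which is exactly the power loss described in the question; Holm (or a false-discovery-rate procedure) relaxes this somewhat while still controlling error rates.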
  • asked a question related to Reasoning
Question
3 answers
I found that practically but don't know the reason.
Relevant answer
Answer
The metabolism of fish, as in other species, is directly related to temperature. Fish are poikilothermic organisms (they do not regulate their internal body temperature) and depend on the ambient temperature; if it is low, their metabolism and growth are also low.
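This temperature dependence is commonly summarized with the Q10 coefficient, R(T2) = R(T1) * Q10^((T2 - T1)/10), with Q10 of roughly 2-3 typical for fish metabolism; the rate values below are illustrative only.

```python
def metabolic_rate(rate_at_ref, t_ref_C, t_C, q10=2.0):
    """Q10 temperature scaling: R(T) = R(T_ref) * Q10**((T - T_ref)/10)."""
    return rate_at_ref * q10 ** ((t_C - t_ref_C) / 10.0)

# With Q10 = 2, a 10 degree C drop in water temperature halves the rate:
r_warm = metabolic_rate(100.0, 25.0, 25.0)  # -> 100.0
r_cold = metabolic_rate(100.0, 25.0, 15.0)  # -> 50.0
```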
  • asked a question related to Reasoning
Question
3 answers
A seemingly obvious explanation for redshift in light that has traveled distances on the order of hundreds of millions or billions of light-years would be a slight loss of energy resulting in redshift. The loss could come, for example, from some quantum event resulting in very occasional loss of a photon, or from some other phenomena. Given the ubiquitous appearance of entropy in nature, there seems no reason to expect light would propagate over extreme distances without some degrading.
If entropy were the cause of redshift it would be expected that the amount of redshift would be based on the distance traveled. Since this conforms with the detected data, it seems this explanation, whatever the objections may be, is a reasonable candidate as the cause of detected redshift. At first glance, this would seem a better candidate than a conjectured but undetected new entity such as “dark energy”.
This is such an obvious explanation for redshift, I am assuming that it has been advanced before, likely numerous times. I am interested in getting up to speed on arguments for or against entropy as an explanation for redshift.
Relevant answer
Answer
Harry
Thanks for taking time to put a well-reasoned answer up. I agree that the doppler effect is the generally accepted explanation for cosmological redshift.
The problem with invoking the doppler effect to explain redshift is that it has led to the expanding universe model, with the need to explain the presumed acceleration of the expansion. This has led to the conjecture of the existence of “dark energy” which is supposed to be gravitationally repulsive.
The only empirical evidence I have seen for dark energy is the cosmological redshift. Being a non-specialist, that seems a pretty radical and improbable explanation for something that could possibly be explained by other phenomena.
For example, Halton Arp in his paper Intrinsic Redshifts in Quasars and Galaxies (https://www.haltonarp.com/articles/intrinsic_redshifts_in_quasars_and_galaxies.pdf) argued:
--
There are three kinds of evidence for the existence of non-velocity redshifts:
1) Statistical association of objects having much different redshifts. (See e.g. analysis by C. Fulton (Fulton & Arp 2009) of the 2dF deep field, and Burbidge & Napier (2009).)
2) Interaction between objects of different redshift (including high redshifts aligned across active galaxy nuclei).
3) Quasars occurring preferentially at certain specific redshifts.
The latter precludes redshifts caused by recessional velocity because it would require matter distributed in shells and receding velocities in discrete steps.
--
My thinking is that if cosmological redshift is going to be explained by imagining exotic phenomena like dark energy, perhaps it is worthwhile to consider other unproven phenomena that are more in line with what we can observe.
Entropy is universally observed. Light is an anomaly in that, over the distances and times we typically can measure, we see no evidence of entropy in its propagation. However, cosmological redshift could be considered evidence for the entropy of light, as redshift is detectable in light that we assume has traveled hundreds of millions or perhaps billions of light-years.
I realize this is not conventional thinking. But the hypothesis of light entropy seems a more reasonable extension of known phenomena to explain a mystery than invoking something completely foreign to our experience, such as "repulsive gravity". And dark energy is only the tip of the iceberg: phenomena even more exotic, such as quantum fluctuations birthing bubble universes, cosmic inflation, and so on, have been imagined to explain the initiation of an expanding universe.
Entropy of light seems like an obvious, and less exotic candidate, to explain cosmic redshift. I am hoping to find the trail left by any who may have traveled this way before.
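As a toy illustration only (not an endorsement of any tired-light model): if a photon lost a fixed fraction of its energy per unit distance, the observed redshift would be z = exp(d/L) - 1, which reduces to the linear Hubble-like relation z ≈ d/L at small distances. L is a free attenuation length, set here to c/H0 purely so the two curves can be compared; H0 = 70 km/s/Mpc is an assumed value.

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc
L_MPC = C_KM_S / H0   # attenuation length chosen to reproduce the local Hubble law

def tired_light_redshift(d_mpc):
    """z = exp(d/L) - 1: constant fractional energy loss per unit distance."""
    return math.exp(d_mpc / L_MPC) - 1.0

def linear_hubble_redshift(d_mpc):
    """z = H0*d/c, the low-redshift linear relation."""
    return d_mpc / L_MPC

# The two agree closely at small distances and diverge at cosmological ones:
z_near = (tired_light_redshift(10.0), linear_hubble_redshift(10.0))
z_far = (tired_light_redshift(4000.0), linear_hubble_redshift(4000.0))
```

Any such model still has to survive the standard objections: scattering-based energy loss would blur distant images, and the time dilation of supernova light curves tracks (1 + z) as the expansion picture predicts.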
  • asked a question related to Reasoning
Question
1 answer
I am working on the CAT25 microsatellite biomarker. The melt curve shows two peaks, while a single band appears on the gel. The primers are taken from published research papers. The DNA is from a healthy individual. The uMelt/Quartz derivative plot also shows two peaks. The results of HRM and the gel are attached. What could be the most likely reason for the two peaks?
Relevant answer
Answer
A single PCR product can have multiple melting transitions that equate to peaks. The PCR product will appear singular on a gel but the product isn't fully denaturing at a single temperature. Often, one region denatures at a lower temperature (AT rich) while the other half (perhaps GC rich) of the amplicon denatures at a higher temperature. It is a more complex melting behavior due to differences in nucleotide content that are creating more stable and less stable regions of your amplicon that are different enough in Tm to create multiple peaks. Try the melt map version (https://www.dna-utah.org/umelt/quartz/map.php) and it will help you identify the regions and how different they actually are in terms of stability.
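The two-domain behavior described above can be illustrated with a very rough sketch: split the amplicon in half and estimate each half's Tm from its GC content using the simple empirical short-oligo formula Tm = 64.9 + 41·(GC - 16.4)/N. A real melt map uses nearest-neighbor thermodynamics, so treat this as illustration only; the sequence below is made up.

```python
def rough_tm(seq):
    """Crude empirical Tm from GC content (the short-oligo formula
    Tm = 64.9 + 41*(GC - 16.4)/N); real melt prediction uses
    nearest-neighbor thermodynamics, so this is illustration only."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

def domain_tms(amplicon):
    """Estimate a Tm for each half of an amplicon; a large gap between
    the halves suggests two melt domains, i.e. two HRM peaks."""
    mid = len(amplicon) // 2
    return rough_tm(amplicon[:mid]), rough_tm(amplicon[mid:])

# Made-up amplicon: AT-rich first half joined to a GC-rich second half
amplicon = "ATATTAATTAATATATTAAT" + "GCGCCGGCGCCGGCGCGGCC"
tm_low, tm_high = domain_tms(amplicon)  # the halves melt far apart
```

The gel still shows one band because electrophoresis separates by length, not by melting behavior, which is exactly the situation the answer describes.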