Sectioning - Science topic
Explore the latest questions and answers in Sectioning, and find Sectioning experts.
Questions related to Sectioning
How to Write the Methods Section of a Research Paper
- A. Result
- B. Methodology
- C. Discussion
- D. Introduction
Hello everyone. I intend to draw a plot in OriginPro whose y-axis looks like the attached photo: the y-axis divided into two sections, where the first section shows the values 0, 20, 60, 100 and the second section shows 1000 and 10000. Could anyone help me with this issue?
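In Origin this is done through the axis-break dialog, but for anyone who wants to prototype the same layout in code, here is a minimal matplotlib sketch (the data values are invented for illustration): two stacked panels share the x-axis, the lower one covering 0-100 with ticks 0, 20, 60, 100, and the upper one covering 1000-10000.

```python
# Minimal broken y-axis sketch; data values are made up for illustration.
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [15, 55, 950, 9800]  # values spanning both y-ranges

fig, (ax_top, ax_bottom) = plt.subplots(
    2, 1, sharex=True, gridspec_kw={"height_ratios": [1, 2]}
)

# Upper panel: the 1000-10000 section (log scale so both ticks are visible)
ax_top.set_yscale("log")
ax_top.set_ylim(1000, 10000)
ax_top.set_yticks([1000, 10000])
ax_top.plot(x, y, "o-")

# Lower panel: the 0-100 section with the requested ticks
ax_bottom.set_ylim(0, 100)
ax_bottom.set_yticks([0, 20, 60, 100])
ax_bottom.plot(x, y, "o-")

# Hide the adjoining spines to suggest the axis break
ax_top.spines["bottom"].set_visible(False)
ax_bottom.spines["top"].set_visible(False)
ax_top.tick_params(bottom=False)

fig.savefig("broken_axis.png")
```

The same idea (two panels, hidden adjoining spines) carries over to Origin's "break" feature, where the break region and the tick lists of each section are set in the Axis dialog.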
#rhf/3-21g pop=nboread
RHF/3-21G for formamide (H2NCHO)
0 1
H -1.908544 0.420906 0.000111
H -1.188060 -1.161135 0.000063
N -1.084526 -0.157315 0.000032
C 0.163001 0.386691 -0.000154
O 1.196265 -0.246372 0.000051
H 0.140159 1.492269 0.000126
$nbo nrt $end
I tried this example, but when running it with Gaussian, the section with the resonance weights is missing from the output.
\documentclass[legalpaper, 12pt, addpoints]{exam}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{tcolorbox}
\usepackage[paper=a4paper]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{amsthm}
\usepackage{makeidx}
\usepackage{tabularx}
\usepackage{hyperref}
\usepackage{times}
\usepackage{color}
\usepackage{etoolbox}
\usepackage{subfig}
\usepackage{multicol}
\usepackage{fancyhdr}
%\usepackage{natbib}
\patchcmd{\abstract}{\null\vfil}{}{}{}
\newtheorem{theorem}{Theorem}[section]
%\newtheorem{defn}{Definition}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{proposition}{Proposition}[section]
%\theoremstyle{definition}
\usepackage{draftwatermark}
\SetWatermarkLightness{0.96}
\SetWatermarkText{Walle\&Mitiku}
\SetWatermarkScale{1.25}
\newtheorem{defn}{Definition}[section]
\numberwithin{equation}{section}
%\numcoverpages
\firstpageheadrule
\rhead{Mathematics for Natural Sciences}
\chead{}
\lhead{Mizan Tepi University}
\runningheadrule
\firstpagefootrule
\lfoot{Math1011 }
\cfoot{ \thepage}
\rfoot{Final Exam~~~\today}
\runningfootrule
\setcounter{MaxMatrixCols}{30}
\linespread{1.5}
\setlength{\topmargin}{-10pt}
\setlength{\oddsidemargin}{-20pt}
\setlength{\evensidemargin}{-1pt}
\setlength{\headheight}{-0.2in}
\setlength{\headsep}{0.3in}
\setlength{\textheight}{9.2in}
\setlength{\textwidth}{6.7in}
\setlength{\parindent}{0in}
\setlength{\parskip}{0in}
\author{walle Tilahun}
\pagenumbering{arabic}
%\usepackage{background}
%\backgroundsetup{contents=\includegraphics{Capture}, scale=, opacity=0.35}
\usepackage{tikz}
\CorrectChoiceEmphasis{\itshape\color{red}}
\begin{document}
\begin{center}
\begin{coverpages}
\begin{center}
\begin{center}
\includegraphics[width=0.25\linewidth]{mtu}
\end{center}
\textbf{MIZAN TEPI UNIVERSITY\\}
\textbf{COLLEGE OF NATURAL AND COMPUTATIONAL SCIENCES\\ DEPARTMENT OF MATHEMATICS\\ Mathematics for Natural Sciences (Math1011)\\ Final Exam for First-Year Regular Students.~~~~Total mark:~~40\%~~~~~\today}
\end{center}
\vspace{5mm}
\makebox[0.95\textwidth]{Name:\enspace\hrulefill Section:\enspace\hrulefill Id.\ No.~~~/----/~~~~Time:~3:00 hr}
%\end{center} \newpage
\extraheadheight{-0.8in}
\parbox{6in}{{\textbf{General Instructions:}\\ The exam consists of 22 questions: 4 true/false, 4 short-answer, 8 multiple-choice, and 6 workout items, within 4 pages.
}
\vspace{0.15in}}
\fbox{\fbox{\parbox{7in}{
\begin{itemize}
\item Before you start, please fill in all the above information correctly.
\item Write your answers on the answer sheet only, and do not mark the question paper.
\item Students should not bring any electronic materials (mobiles, laptops, \dots) into the exam hall.
\item Cheating in the examination in any form is a serious offense.
\end{itemize} }}}
\end{coverpages}
\end{center}
\parbox{6in}{\textsc{\textbf{Part I.} Write True if the statement is correct or False if it is incorrect in the space provided on the answer sheet (1 pt each).}}
\vspace{0.1in}
\hrule
\vspace{0.1in}
\begin{questions}
\question Composition of functions is always commutative, meaning \( f(g(x)) = g(f(x)) \).
%\question The hyperbolic function $\cosh(x) $ is always positive for all real number $x$.
\question The graph of $y=\sin(x)$ is an increasing function on $[\frac{3\pi}{4},\frac{5\pi}{4}]$.
\question Every one-to-one function has an inverse.
\question The simplified form of $\log_{5}(\log_{3}243)$ is $1$.
\\\parbox{6in}{\textsc{\textbf{Part II.} Fill in the following blank spaces with the most simplified appropriate answer for each question (1 pt each).}}
\vspace{0.1in}
\hrule
\vspace{0.1in}
\question If \( f(x) = x^2 \) and \( g(x) = \sqrt{x} \), then \( (f \circ g)(4) = \)\dots\dots
\question The domain of the function $f(x)=\sqrt{4-x^2}$ is \dots\dots
%\question If $g(x)=2x+1$ and \( (f \circ g)(x) \)=$12x^2-2x-9$,then the function $f(x)$=
\question The value of $\tanh(\ln 3)=$\dots\dots
\question Suppose $f(x)=9-3^x$, then $f^{-1}(x)=$\dots\dots
\\\parbox{6in}{\textsc{\textbf{Part III.} Read the following questions carefully, then choose the correct answer from the suggested options and write only the letter of your choice on the separate answer sheet provided (1.5 pts each).}}
\vspace{0.1in}
\hrule
\vspace{0.1in}
%\begin{questions}
\question Consider $f(x)=\log_{5}(x+4)$, then which of the following is \underline{not true} about the graph of $f(x)$?
\begin{choices}
\CorrectChoice The y-axis is a vertical asymptote of the graph of $f(x) $
\choice The function increases as the value of x increases.
\choice The domain of $f(x)$ is the set of all real numbers greater than $-4$
\choice The x-intercept of the graph of $f(x)$ is $-3$
\end{choices}
\question Which one of the following is \underline{true} about $f(x)=x^3+4x^2+5x+2$?
\begin{choices}
\choice $x-2$ is a factor of $f(x)$
\CorrectChoice $x+2$ is a factor of $f(x)$
\choice $x-1$ is a factor of $f(x)$
\choice $3x+1$ is a factor of $f(x)$
\end{choices}
\newpage
\question What does it mean for a function to be one-to-one?
\begin{choices}
\CorrectChoice Each output value corresponds to exactly one input value
\choice There is no restriction on the number of input-output pairs
\choice The function is not defined for certain values
\choice Each input value corresponds to exactly one output value
\end{choices}
\question Which of the following expressions is true about a relation $R$?
\begin{choices}
\choice Domain$((R^{-1})^{-1})$= Domain$(R^{-1})$
\CorrectChoice Domain$(R)$= Range$(R^{-1})$
\choice Range$(R)$= Range$(R^{-1})$
\choice Domain$(R)$= Domain$(R^{-1})$
\end{choices}
\question Which one of the following is correct about $f(x)=-3+\log_{2}(x-1)$?
\begin{choices}
\choice x-intercept at $(1,0)$
\CorrectChoice $x=1$ is a vertical asymptote
\choice The graph is logarithmic decay
\choice The domain is the set of all real numbers
\end{choices}
\question Suppose $f=\{(1,2),(3,5),(6,7)\}$ and $g=\{(a,1),(b,3),(c,6),(d,8)\}$, then $f\circ g=$
\begin{choices}
\choice $\{(1,2),(3,5),(6,7),(d,8)\}$
\CorrectChoice $\{(a,2),(b,5),(c,7)\}$
\choice $\{(a,1),(b,7),(c,3)\}$
\choice $\{(2,a),(5,b),(7,c),(8,d)\}$
\end{choices}
\question The solution set of $\log_{2}(x+2)-\log_{2}(x-1)=2$ is $\dots$
\begin{choices}
\choice $\frac{2}{35}$
\choice $\frac{35}{2}$
\choice $-2$
\CorrectChoice $2$
\end{choices}
\question Let $\tan\theta=\frac{3}{4}$ and $\sin\theta<0$, then the value of $\cos\theta$ is \dots
\begin{choices}
\choice $\frac{4}{5}$
\choice $\frac{3}{5}$
\CorrectChoice $-\frac{4}{5}$
\choice $-\frac{3}{5}$
\end{choices}
\parbox{6in}{\textsc{\textbf{Part IV.} Read the following questions carefully, then show all necessary steps neatly and correctly on the answer sheet provided (20 pts).}}
\vspace{0.1in}
\hrule
\vspace{0.1in}
\question Show that $2\cosh^{2}(x)-\cosh(2x)=1$~~~$\dots\dots\dots$ (2 pts)
\question Let $h(x)=4^x$, then show that $h(x+2)=16h(x)$.~~$\dots$ (2 pts)
\question Find all solutions when extracting the 3rd (cube) root of the complex number $z = 4i$.~~$\dots\dots\dots$ (4 pts)
\question Find the point $P(x_0,y_0)$ that divides the line segment $AB$ joining the points $A(2,3)$ and $B(7,-2)$ in the ratio $2:3$.~~$\dots\dots\dots$ (4 pts)\\
a.~Does the point $P(x_0,y_0)$ lie on the line $L: x-5y+15=0$?\\
b.~Find the distance between $P(x_0,y_0)$ and $L: x-5y+15=0$.
\question Let $g:\mathbb{R}\to \mathbb{R}$ be defined by $g(x)=\sqrt[3]{x+1}$, then show that $g(x)$ is a one-to-one correspondence.~~$\dots$ (4 pts)
\question If $f(x)=\frac{3x+4}{x^2-9}$, then find~~$\dots\dots\dots$ (4 pts)
\begin{itemize}
\item[a] the domain of $f(x)$
\item[b] the x-intercept and y-intercept of $f(x)$
\item[c] all asymptotes of $f(x)$
\item[d] Sketch the graph of $f(x)$.
\end{itemize}
\section*{Bonus (3\%): Sketch the graph of $f(x)=2-2^{-x}$, then discuss the domain, range, intercepts, and asymptotes clearly and correctly.}
\end{questions}
\end{document}
What are the effects of using dowels and clamps in the arch section of historical bridges on the seismic behavior of the bridges?
Hello,
for my master's thesis I need to write a section about electrochemical potential:
what it is and its influence on the electrochemical cell.
Could anyone please give me some references that could help?
Thanks!
I am staining mouse brain tissue sections using an anti-GFAP primary antibody (for astrocytes). The results are coming out pretty weird. Some sections have decent staining, and some others have horrible/very weak staining. I'm using the same protocol and reagents as I usually do (and have had successful staining in the past), so I won't go into those details here.
Instead, this was my first time transcardially perfusing the animals with paraformaldehyde. I suspect some of the perfusions did not go well: a couple of the bodies did not get very stiff. After brain extractions, I put them in a PFA/sucrose solution overnight. Looking at my stained sections under the microscope, the clearance of blood didn't seem to be a huge issue. There is a little autofluorescence going on (due to the blood, I suspect), but overall, there is no blood in the sections.
So, could it be that the PFA didn't penetrate my tissue well? Would this cause extremely weak signal in my sections (even though the tissue did sit in PFA overnight)?
The pictures are examples.
1) The staining came out as expected on this one.
2) Verrrry weak signal, but you can see the GFAP.
3) An example of autofluorescence from the blood that was left. I see absolutely no GFAP/astrocytes.
Can we stop global climate change? Is human scientific power equal to the world's climate change? How do researchers respond?
As you know, humans are very intelligent and can predict the future climate of the world through hydrology, climatology and paleontology. But do the countries that produce the most harmful gases in the earth's atmosphere, especially the industrialized ones, not think about the future of the earth's atmosphere? Do they listen to the research of climatologists? What would have to happen to force them to listen to climate scientists?
Miloud Chakit added a reply
Climate change is an important and complex global challenge, and scientific theories about it are based on extensive research and evidence. The future path of the world depends on various factors including human actions, political decisions and international cooperation.
Efforts to mitigate and adapt to climate change continue. While complete reversal may be challenging, important steps can be taken to slow progression and lessen its effects. This requires global cooperation, sustainable practices and the development and implementation of clean energy technologies.
Human scientific abilities play an important role, but dealing with climate change also requires social, economic and political changes. The goal is to limit global warming and its associated impacts, and collective action at the local, national, and international levels is essential for a more sustainable future.
Osama Bahnas added a reply
It is impossible to stop global climate change. Human scientific power cannot match the scale of the world's climate change.
Borys Kapochkin added a reply
Mathematical models of increasing planetary temperature as a function of the argument - anthropogenic influence - are erroneous.
Alastair Bain McDonald added a reply
We could stop climate change, but we won't! We have the scientific knowledge but not the political will. One could blame Russia and China for refusing to cooperate, but half the population of the USA (Republicans) deny climate change is a problem and prefer their profligate lifestyles.
All climate change has been attributed to the CO2 responsible for the greenhouse effect. Therefore, there must be scientific experiments from several independent scientific institutes worldwide to find out what the greenhouse impact is at various CO2 concentrations. Then, there must be a conference held by a reliable, professional organization, with the participation of all independent scientific institutions, to establish standards on CO2 concentrations and propose political actions accordingly.
The second action that can be taken is to plant as many trees and plants as possible to take up the CO2 and free the oxygen. Stop any deforestation and plant trees immediately in any burnt areas.
Effect of Injecting Hydrogen Peroxide into Heavy Clay Loam Soil on Plant Water Status, NET CO2 Assimilation, Biomass, and Vascular Anatomy of Avocado Trees
In Chile, avocado (Persea americana Mill.) orchards are often located in poorly drained, low-oxygen soils, a situation which limits fruit production and quality. The objective of this study was to evaluate the effect of injecting soil with hydrogen peroxide (H2O2), as a source of molecular oxygen, on plant water status, net CO2 assimilation, biomass and anatomy of avocado trees set in clay loam soil with water content maintained at field capacity. Three-year-old ‘Hass’ avocado trees were planted outdoors in containers filled with heavy loam clay soil with moisture content sustained at field capacity. Plants were divided into two treatments, (a) H2O2 injected into the soil through subsurface drip irrigation and (b) soil with no H2O2 added (control). Stem and root vascular anatomical characteristics were determined for plants in each treatment in addition to physical soil characteristics, net CO2 assimilation (A), transpiration (T), stomatal conductance (gs), stem water potential (SWP), shoot and root biomass, and water use efficiency (plant biomass per water applied [WUEb]). Injecting H2O2 into the soil significantly increased the biomass of the aerial portions of the plant and WUEb, but had no significant effect on measured A, T, gs, or SWP. Xylem vessel diameter and xylem/phloem ratio tended to be greater for trees in soil injected with H2O2 than for controls. The increased biomass of the aerial portions of plants in treated soil indicates that injecting H2O2 into heavy loam clay soils may be a useful management tool in poorly aerated soil.
Shade trees reduce building energy use and CO2 emissions from power plants
Urban shade trees offer significant benefits in reducing building air-conditioning demand and improving urban air quality by reducing smog. The savings associated with these benefits vary by climate region and can be up to $200 per tree. The cost of planting trees and maintaining them can vary from $10 to $500 per tree. Tree-planting programs can be designed to have lower costs so that they offer potential savings to communities that plant trees. Our calculations suggest that urban trees play a major role in sequestering CO2 and thereby delay global warming. We estimate that a tree planted in Los Angeles avoids the combustion of 18 kg of carbon annually, even though it sequesters only 4.5-11 kg (as it would if growing in a forest). In this sense, one shade tree in Los Angeles is equivalent to three to five forest trees. In a recent analysis for Baton Rouge, Sacramento, and Salt Lake City, we estimated that planting an average of four shade trees per house (each with a top view cross section of 50 m2) would lead to an annual reduction in carbon emissions from power plants of 16,000, 41,000, and 9000 t, respectively (the per-tree reduction in carbon emissions is about 10-11 kg per year). These reductions only account for the direct reduction in the net cooling- and heating-energy use of buildings. Once the impact of the community cooling is included, these savings are increased by at least 25%.
Can Moisture-Indicating Understory Plants Be Used to Predict Survivorship of Large Lodgepole Pine Trees During Severe Outbreaks of Mountain Pine Beetle?
Why do some mature lodgepole pines survive mountain pine beetle outbreaks while most are killed? Here we test the hypothesis that mature trees growing in sites with vascular plant indicators of high relative soil moisture are more likely to survive mountain pine beetle outbreaks than mature trees associated with indicators of lower relative soil moisture. Working in the Clearwater Valley of south central British Columbia, we inventoried understory plants growing near large-diameter and small-diameter survivors and nonsurvivors of a mountain pine beetle outbreak in the mid-2000s. When key understory species were ranked according to their accepted soil moisture indicator value, a significant positive correlation was found between survivorship in large-diameter pine and inferred relative high soil moisture status—a finding consistent with the well-documented importance of soil moisture in the mobilization of defense compounds in lodgepole pine. We suggest that indicators of soil moisture may be useful in predicting the survival of large pine trees in future pine beetle outbreaks. Study Implications: A recent outbreak of the mountain pine beetle resulted in unprecedented levels of lodgepole pine mortality across southern inland British Columbia. Here, we use moisture-dependent understory plants to show that large lodgepole pine trees growing in sites with high relative moisture are more likely than similar trees in drier sites to survive severe outbreaks of mountain pine beetle—a finding that may be related to a superior ability to mobilize chemical defense compounds compared with drought-stressed trees.
Can Functional Traits Explain Plant Coexistence? A Case Study with Tropical Lianas and Trees
Organisms are adapted to their environment through a suite of anatomical, morphological, and physiological traits. These functional traits are commonly thought to determine an organism’s tolerance to environmental conditions. However, the differences in functional traits among co-occurring species, and whether trait differences mediate competition and coexistence is still poorly understood. Here we review studies comparing functional traits in two co-occurring tropical woody plant guilds, lianas and trees, to understand whether competing plant guilds differ in functional traits and how these differences may help to explain tropical woody plant coexistence. We examined 36 separate studies that compared a total of 140 different functional traits of co-occurring lianas and trees. We conducted a meta-analysis for ten of these functional traits, those that were present in at least five studies. We found that the mean trait value between lianas and trees differed significantly in four of the ten functional traits. Lianas differed from trees mainly in functional traits related to a faster resource acquisition life history strategy. However, the lack of difference in the remaining six functional traits indicates that lianas are not restricted to the fast end of the plant life–history continuum. Differences in functional traits between lianas and trees suggest these plant guilds may coexist in tropical forests by specializing in different life–history strategies, but there is still a significant overlap in the life–history strategies between these two competing guilds.
The use of operator action event trees to improve plant-specific emergency operating procedures
Even with plant standardization and generic emergency procedure guidelines (EPGs), there are sufficient dissimilarities in nuclear power plants that implementation of the guidelines at each plant must be performed in a manner that ensures consideration of plant-specific design features and operating characteristics. The use of operator action event tress (OAETs) results in identification of key features unique to each plant and yields insights into accident prevention and mitigation that can be factored into plant-specific emergency procedures. Operator action event trees were developed as a logical extension of the event trees developed during probabilistic risk analyses. The dominant accident sequences developed from a plant-specific probabilistic risk assessment represent the utility's best understanding of the most likely combination of events that must occur to create a situation in which core cooling is threatened or significant releases occur. It is desirable that emergency operating procedures (EOPs) provide adequate guidance leading to appropriate operator actions for these sequences. The OAETs provide a structured approach for assuring that the EOPs address these situations.
Plant and Wood Area Index of Solitary Trees for Urban Contexts in Nordic Cities
Background: We present the plant area index (PAI) measurements taken for 63 deciduous broadleaved tree species and 1 deciduous conifer tree species suitable for urban areas in Nordic cities. The aim was to evaluate PAI and wood area index (WAI) of solitary-grown broadleaved tree species and cultivars of the same age in order to present a data resource of individual tree characteristics viewed in summer (PAI) and in winter (WAI). Methods: All trees were planted as individuals in 2001 at the Hørsholm Arboretum in Denmark. The field method included a Digital Plant Canopy Imager where each scan and contrast values were set to consistent values. Results: The results illustrate that solitary trees differ widely in their WAI and PAI and reflect the integrated effects of leaf material and the woody component of tree crowns. The indications also show highly significant (P < 0.001) differences between species and genotypes. The WAI had an overall mean of 0.91 (± 0.03), ranging from Tilia platyphyllos ‘Orebro’ with a WAI of 0.32 (± 0.04) to Carpinus betulus ‘Fastigiata’ with a WAI of 1.94 (± 0.09). The lowest mean PAI in the dataset was Fraxinus angustifolia ‘Raywood’ with a PAI of 1.93 (± 0.05), whereas Acer campestre ‘Kuglennar’ represents the cultivar with the largest PAI of 8.15 (± 0.14). Conclusions: Understanding how this variation in crown architectural structure changes over the year can be applied to climate responsive design and microclimate modeling where plant and wood area index of solitary-grown trees in urban contexts are of interest.
Do Exotic Trees Threaten Southern Arid Areas of Tunisia? A Case Study of Plant-Plant Interactions. Indian Journal of Ecology (2020)
This study was conducted in an afforested Stipa tenacissima steppe, with the aim of comparing the effects of exotic and native trees (Acacia salicina and Pinus halepensis, respectively) on the understory vegetation and soil properties. For each tree species, two sub-habitats were distinguished: the canopied sub-habitat (under the tree crown) and the un-canopied sub-habitat (open grassland). Soil moisture was measured in both sub-habitats at 10 cm depth. In parallel to soil moisture, we investigated the effect of tree species on soil fertility. Soil samples were collected from the upper 10 cm of soil, excluding litter and stones. The nutrient status of the soil (organic matter, total N, extractable P) was significantly higher under A. salicina compared to P. halepensis and open areas. This tendency was consistent with the soil water content, which was significantly higher under trees compared to open sub-habitats; for water content, there were no significant differences between the studied tree species. Total plant cover, species richness and the density of perennial species were significantly higher under the exotic species compared to the other sub-habitats. Of the two tree species, Acacia salicina had the strongest positive effect on the understory vegetation. It seems to be more useful as a restoration tool in arid areas, and more suitable for creating islands of resources and fostering succession, than the other investigated tree species.
Effects of Elevated Atmospheric CO2 on Microbial Community Structure at the Plant-Soil Interface of Young Beech Trees (Fagus sylvatica L.) Grown at Two Sites with Contrasting Climatic Conditions
Soil microbial community responses to elevated atmospheric CO2 concentrations (eCO2) occur mainly indirectly via CO2-induced plant growth stimulation leading to quantitative as well as qualitative changes in rhizodeposition and plant litter. In order to gain insight into short-term, site-specific effects of eCO2 on the microbial community structure at the plant-soil interface, young beech trees (Fagus sylvatica L.) from two opposing mountainous slopes with contrasting climatic conditions were incubated under ambient (360 ppm) CO2 concentrations in a greenhouse. One week before harvest, half of the trees were incubated for 2 days under eCO2 (1,100 ppm) conditions. Shifts in the microbial community structure in the adhering soil as well as in the root rhizosphere complex (RRC) were investigated via TRFLP and 454 pyrosequencing based on 16S ribosomal RNA (rRNA) genes. Multivariate analysis of the community profiles showed clear changes of microbial community structure between plants grown under ambient and elevated CO2 mainly in RRC. Both TRFLP and 454 pyrosequencing showed a significant decrease in the microbial diversity and evenness as a response of CO2 enrichment. While Alphaproteobacteria dominated by Rhizobiales decreased at eCO2, Betaproteobacteria, mainly Burkholderiales, remained unaffected. In contrast, Gammaproteobacteria and Deltaproteobacteria, predominated by Pseudomonadales and Myxococcales, respectively, increased at eCO2. Members of the order Actinomycetales increased, whereas within the phylum Acidobacteria subgroup Gp1 decreased, and the subgroups Gp4 and Gp6 increased under atmospheric CO2 enrichment. Moreover, Planctomycetes and Firmicutes, mainly members of Bacilli, increased under eCO2. Overall, the effect intensity of eCO2 on soil microbial communities was dependent on the distance to the roots. This effect was consistent for all trees under investigation; a site-specific effect of eCO2 in response to the origin of the trees was not observed.
Depending on the journal article, some have a methods section with a clear design and some have no methods section at all, yet are still called journal articles (many such examples can be found from reputed publishers). This makes me perplexed about what counts as a scientific article or journal in the social sciences (as opposed to the natural sciences). Professors, scholars and researchers, please help to clarify. Thanks!
I have a question about some weird nuclei I see while imaging. I'm sorry if this isn't the best place to write it!
One image has bad, "shattered"-looking nuclei from dentate gyrus.
The other image has healthy nuclei from CA1.
I end up in some experiments with really terrible looking DAPI with "shattered" nuclei. Sections with this type of DAPI are highly correlated with really bad FISH staining as well. It seems like most of the tissue integrity is lost in general.
I've seen this before, but not enough times to know what step of tissue collection and processing could cause it. I don't *think* this is purely a cryosectioning issue.
Does anyone have any guesses for what could cause this issue? I'm guessing I can't be the first person to run into this!
Tissue processing
I am imaging coronal brain sections from mouse tissue that is flash frozen immediately after dissection. In this case I am imaging the dentate gyrus, where nuclei should be exceptionally dense. 20 um cryosections are immediately placed onto Superfrost glass slides and stored at -70 °C (in this case for ~2 weeks).
Sections are fixed with 4% PFA on the slide and then dehydrated with serial EtOH incubations of 50%, 70%, and 100% for 5 minutes each. Then sections go through the RNAscope smFISH protocol, at the end of which they are stained with DAPI for 30 seconds.
I have deposited chrome on a glass wafer, then applied a dielectric layer of SU-8 over it. After that, to make the surface hydrophobic, I applied Teflon by spin coating. However, I found that the hydrophobicity is not the same in every section of the wafer. I would be extremely thankful if you could share your ideas for improving the surface hydrophobicity.
Hi Everybody,
I have a presentation regarding observational studies, which I have divided as follows:
- Descriptive studies: case report/case series, cross-sectional study, ecological study
- Analytical studies: cohort study, case-control study, cross-sectional study
My question is that I am not sure whether the ecological study is analytic or descriptive.
Which is the best place for it, descriptive or analytic?
Thank you!
Frozen sectioning via microtome, sections 40 um thick.
I am wondering if the sections are thinner in those ventral, whiter spots (perhaps from the tissue warming too much in that spot), or whether it is a dehydration artifact, if these sections were sticking out a bit from the cryo solution in the tube or during washes. These two sections are 10 sections (400 microns) apart from each other. Or a freezing artifact, but other sections look fine.
I notice the very edges of sections similarly turn opaque white sometimes as they dry. Maybe these spots are slightly lifted off the slide?
Thank you for your help!
What is a super vacuum? Is the earth in a vacuum? And what is dark energy?
It has not been proven to this day, and nature has many times in the past produced exceptions to, and violations of, accepted theories: these were merely human formalisms and experimental artifacts exploiting the limits of technology, and physical limits and laws are constantly being broken and bent in nature. Here we attempt to show theoretically why and how vacuum space exists in our universe, and to present experimental evidence for it, either in its theoretically idealized absolute form (free space) or as the partial vacuum that characterizes the vacuum of QED or QCD. Its zero-point energy and oscillations may actually be the greatest proof in nature of super energy.
It is possible, without violating causation, that the apparent "nothingness" of vacuum space may be evidence for superluminosity, hidden all this time right in front of us. We are trying to answer a fundamental question of physics: why the vacuum, which is basically space, looks to us like nothing, on the assumption that "nothing" exists in nature, and why a hypothetical superluminous vibration of a Planck-sized particle creates apparent nothingness in our spacetime. The novelty of the research here is the inference that free space is dark energy, and that this is superluminous energy.
Stam Nicolis added a reply:
(1) It depends on what is meant by "super vacuum". The words must first be defined before questions can be asked. As it stands, it doesn't mean anything.
(2) To a good approximation the earth is moving around the Sun in a vacuum, i.e. its motion can be described by Newtonian mechanics, where the only bodies are the Earth and the Sun and the force between them is Newton's force of gravitation.
(3) Dark energy is the property of space and time that describes the fact that the Universe isn't, simply, expanding, but that this expansion is accelerating. To detect its effects it's necessary to measure the motion of bodies outside our galaxy.
To understand all this it's necessary to study classical mechanics (which leads to understanding the answer to the second question) and general relativity (in order to understand the answer to the third).
László Attila Horváth added a reply:
Dear Abbas Kashani ,
The graviton, which creates or captures elementary X-rays and gamma rays, can by itself be considered almost a super vacuum.
Sergey Shevchenko added a reply:
What the rather numerous, and really strange, "vacuums" in mainstream physics are, and what the two real vacuums are, is explained in the Shevchenko-Tokarevsky Planck-scale informational physical model; the main papers include
https://www.researchgate.net/publication/367397025_The_Informational_Physical_Model_and_Fundamental_Problems_in_Physics , section 6, "Mediation of the fundamental forces in complex systems".
The first vacuum is Matter's fundamentally absolute, fundamentally flat, fundamentally continuous, and fundamentally "Cartesian" (at least) [4+4+1]4D spacetime with metrics (at least) (cτ,X,Y,Z, g,w,e,s,ct), which is the actualization of the Logos set elements "Space" and "Time" [for what the "Logos" set, "Space", and "Time" are, see the first pages of the 1st or 2nd link] at the creation and existence of the concrete informational system "Matter",
- i.e. this vacuum is a logical possibility for Matter's existence and evolution, and so is by definition nothing more than a fundamentally "empty container", i.e. the "real/absolute" vacuum.
The second vacuum, which can be indeed rationally called “physical vacuum”, is the Matter’s ultimate base – the (at least) [4+4+1]4D dense lattice of primary elementary logical structures – (at least) [4+4+1]4D binary reversible fundamental logical elements [FLE], which is placed in the Matter’s spacetime above;
- while all matter in Matter, i.e. all particles, fields, stars, galaxies, etc., are only disturbances in the lattice that were/are created by impacts on some of the lattice's FLEs. It looks scientifically rather rational to assume that such a vacuum really existed: that was the initial version of the lattice, created/formed in the "inflation epoch"; for more, see the SS&VT initial cosmological model in the section "Cosmology" in the 2nd link.
After this initial lattice version was created, in the lattice a huge portion of energy was pumped uniformly globally [and non-uniformly locally], what resulted in Matter’s “matter” creation, which we observe now.
Since all disturbances always and constantly move in the lattice with 4D speeds of light, now can be only some “local physical vacuums”, etc.;
- though that is really quite inessential: the notion "physical vacuum" is completely useless and even wrong, since the really scientifically defined FLE lattice is completely sufficient for the description and analysis of everything that exists and happens in Matter. The "vacuums" introduced in mainstream physics are really nothing more than transcendent/mystic/fantastic mental constructions, which exist in mainstream physics because in the mainstream all fundamental phenomena/notions, including "Matter", "Space/space", and "Time/time", are fundamentally transcendent/uncertain/irrational,
- while these, and other, really fundamental phenomena/notions can be, and are, rigorously scientifically defined only in the framework of the SS&VT 2007 philosophical conception "The Information as Absolute"; for a recent version of the basic paper, see
- the link above; the SS&VT physical model is based on this conception.
More see the links above, a couple of SS posts in
https://www.researchgate.net/post/What_is_the_concept_of_quantized_vacuum_And_what_is_the_role_of_gravity_in_nature_And_what_is_the_relationship_between_dark_energy_and_quantum_gravi are relevant in this case also.
Abderrahman el Boukili added a reply:
Super vacuum, in my view, is just the vacuum itself, that is, the channel through which the universe of particles and anti-particles intersects.
Courtney Seligman added a reply:
For all practical purposes, the Earth is moving through a vacuum as it orbits the Sun, as there is so little of anything in any given place that only the most sensitive instruments could tell that there was anything there. But there are microscopic pieces of stuff that used to be inside asteroids or comets, and pieces of atoms blown out of the Sun as the Solar Wind, and cosmic rays that manage to get through the Sun's "heliosphere" and run into anything that happens to be in their way. So though the essentially empty space around the Earth would qualify as a vacuum by any historical standard, it isn't an absolutely perfect vacuum. And I suppose a "super vacuum" would be a region where there isn't anything at all, including not only matter, but also any form of energy (which has a mass equivalence of sorts, per Einstein's Special Theory of Relativity); and if so, then "super vacuums" do not exist.
Harri Shore added a reply:
The concepts you're exploring—super vacuum, dark energy, and the nature of the vacuum in quantum electrodynamics (QED) and quantum chromodynamics (QCD)—touch on some of the most profound and speculative areas in modern physics. Let's break down these concepts to provide clarity and context for your inquiry.
Super Vacuum
The term "super vacuum" is not widely used in mainstream physics literature but could be interpreted to mean an idealized vacuum state that is more "empty" than what is typically considered achievable, even beyond the vacuum state described by quantum field theories. In standard quantum field theories, a vacuum is not truly empty but seethes with virtual particles and fluctuates due to quantum uncertainties, known as zero-point energy.
Is the Earth in a Vacuum?
The Earth is not in a vacuum but is surrounded by its atmosphere, a thin layer of gases that envelops the planet. However, outer space, which begins just beyond the Earth's atmosphere, is often described as a vacuum. This is because outer space contains far fewer particles than the Earth's atmosphere, making it a near-vacuum by comparison. It's important to note that even the vacuum of outer space is not completely empty but contains low densities of particles, electromagnetic fields, and cosmic radiation.
Dark Energy
Dark energy is a hypothetical form of energy that permeates all of space and tends to accelerate the expansion of the universe. It is one of the greatest mysteries in modern cosmology, making up about 68% of the universe's total energy content according to current observations. The exact nature of dark energy is still unknown, but it is thought to be responsible for the observed acceleration in the expansion rate of the universe since its discovery in the late 1990s through observations of distant supernovae.
Vacuum Energy and Superluminosity
Vacuum energy refers to the energy that exists in space due to fluctuations of the quantum fields, even in the absence of any particles or radiation. It is a manifestation of the Heisenberg uncertainty principle in quantum mechanics, which allows for the temporary creation of particle-antiparticle pairs from "nothing."
The concept of superluminosity or superluminal phenomena (faster-than-light phenomena) is speculative and not supported by current mainstream physics, as it would violate the principle of causality, a cornerstone of the theory of relativity. However, there have been theoretical explorations of conditions under which apparent superluminal effects could occur without violating causality, such as in the context of quantum tunneling or warp drives in general relativity.
Vacuum Space as Evidence of Superluminous Energy
Your hypothesis suggests that vacuum space or "nothingness" might be evidence of a superluminous energy or vibration at the Planck scale that creates the apparent emptiness of space. This is a speculative notion that would require new theoretical frameworks beyond the standard model of particle physics and general relativity. It also implies that dark energy, the force behind the universe's accelerated expansion, could be related to this superluminous vacuum energy.
While current physical theories and experimental evidence do not support the existence of superluminous phenomena or energies, the history of science shows that our understanding of the universe is constantly evolving. Theoretical proposals that challenge existing paradigms are valuable for pushing the boundaries of our knowledge and prompting new avenues of experimental and theoretical investigation. However, any new theory that proposes mechanisms beyond established physics must be rigorously tested and validated against empirical evidence.
Courtney Seligman added a reply:
1. A vacuum is a region of space with no matter; a super vacuum could be defined in one of two ways, depending on whether it is a concept or a description of current technology. In the first instance, it would be a region of space with neither matter nor energy (in which case, unless it is an extremely small region, it does not exist, because any part of space big enough to see without a microscope would at least have light of some sort passing through it, e.g. at least the Cosmic Background Radiation). In the second instance, it could be used to describe a "laboratory" vacuum which has far less matter in it than any previously created laboratory vacuum.
2. The Earth is in a region that is essentially a vacuum, because most of the space between the planets has practically nothing in it at any given time. However, there are cosmic rays and the Solar Wind everywhere, so though merely pieces of atoms, there is some stuff everywhere in space; but the amount is so small that for all "practical" purposes, it is a vacuum.
3. Dark energy is a fiction created by cosmologists to explain why, despite the Universe having too little mass for its gravity to fight the tendency of empty space to expand (per Einstein's General Theory of Relativity), the geometry of the Observable Universe is "flat", which would require something to add up to 100% of the "critical mass" of the Universe; and since visible and unobservable ("dark") matter makes up at most 27% of the critical mass, cosmologists created the concept of dark energy to make up the remaining 73%.
However, there is no need to presume that the Universe is flat. Just as the Earth is a globe but looks essentially flat (on average, and particularly at sea) because you can't see enough of it to see its real shape, the Universe is actually what is called "hyperbolic" in shape, which is exactly what you would expect if its mass is less than the "critical" mass. However, almost all cosmologists are convinced, by various characteristics of the Observable Universe, that the "real" Universe is at least thousands, and perhaps 10 to the thousands, of times bigger than what we can see; what we can see is too small to reveal its real shape, so it just looks "flat".
Since by definition we can't see anything but the "Observable" Universe, we will never be able to see the true shape of the Universe; so "dark energy" will remain a "useful" fiction for calculation purposes for the foreseeable (if not infinite) future. But I am certain that we will never figure out what it is, because it doesn't exist. (Having been both a mathematician and a professional astronomer, I can assure you that even when something like "dark energy" doesn't exist in real life, creating a mathematical model that includes it, in order to make the math work right, is considered perfectly OK by professional mathematicians.)
Sergio Perez Felipe added a reply:
Introduction. The 'Theory of Everything' is a hypothetical theory of physics that explains and connects all known physical phenomena. There is a possible solution to the origin of the force of gravity, postulating it as an angular piece of this theory; this solution erases gravity as one of the fundamental forces of nature and unifies it with the strong nuclear force. Let's analyze the forces that occur in the universe by transforming string theory. String theory allows us to explain many physical behaviors that would otherwise be practically impossible to understand; even so, these strings have not been discovered and remain only a theory that serves as an important support to the world of physics. One of its best-known theoretical applications is how string vibration can provoke the creation of matter, but this is not about theories already written: we are going to place these strings in a simpler way, to answer some doubts about the subatomic world. This theory uses 4 dimensions in space, with strings behaving as one-dimensional objects with superconducting capacities. Like an elastic band between V-shaped sticks, where the elastic band slides down, the strong nuclear force forces these strings to bend and fall down.
It's not directly related to electromagnetism.
Actors.
String Theory. String theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. Each string that we cross would be the minimum distance that can be traversed during a displacement. We can note two important qualities of strings. The distance to the most distant object detected by human beings is more than 30 billion light-years, which means there are beams of light able to travel that distance without decreasing their speed (they modify only their wavelength). Like light, an object can move through space for a practically unlimited period, as long as it doesn't find a force to stop it. If strings exist, they act as a superconductor of matter with a resistance near 0. It is easier to generate waves in a strongly linked structure. Gravitational waves behave like ocean waves, which are similar to a taut net; these tensions can be decomposed into one-dimensional structures for study. Strings, at the same time, could be one- or zero-dimensional, like points under extreme binding forces: think of them as something tenser than any cable holding the heaviest bridge in the world. The new framework we have drawn would be a set of extremely tense strings, with a practically infinite matter-conduction capacity. Remember that we are moving through the universe at an estimated speed of 600 km/s.
Strong Nuclear Force. The strong nuclear force is another variable. This force allows the atomic nucleus to remain together, being the strongest of the so-called fundamental interactions (gravitational, electromagnetic, strong nuclear, and weak nuclear). The gluon is in charge of this interaction; it has a range no greater than 10^-15 meters, preventing matter from separating by a constant attraction force between quarks of at most 10,000 N (F).
This picture illustrates the three-dimensional structure of gluon-field configurations, describing the vacuum properties. The volume of the box is 2.4 by 2.4 by 3.6 fm. Contrary to the concept of an empty vacuum, this induces chromo-electric and chromo-magnetic fields in its lowest energy state. The frame rate in this example is billions of billions of frames per second (FPS).
Superconducting String Theory (SST). Fundamentals: a superconductor of matter interacting with a force that makes that matter hold together; but how can they interact with each other? The simplest approach is to think of two V-shaped sticks (simulating the strings) and an elastic band that ties them at the most open side (simulating the gluon, with a size of 10^-15 meters). If the sticks are sufficiently lubricated and tense, what does the elastic band do? It slides to the thinnest side. The more elastic bands, the more force is exerted on the sticks to join them, so the next bands slide even faster (equally, more mass causes more attraction). We are talking about limits unknown in the known world, such as infinite conduction or tensions never seen in materials. Suddenly, we have erased one of the fundamental forces of nature: the force of gravity doesn't really exist; what exists is the strong nuclear force interacting with strings. We call this theory 'Superconducting String Theory (SST)'.
Calculations: apply formulas from inclined planes (Newton's second law). The simulation is in the horizontal direction; friction is imperceptible, and the acceleration down the plane is matched with the gravitational acceleration on our planet. The vertical force is not the force of gravity; it is the gluon force, whose values are estimated, so we keep the force at 10,000 N (F1) and the mass at 0.0002 eV/c² (m2). A vertical angle can be considered, but it is negligible.
Dark energy and the universe's expansion.
The behaviour of the strings implies some kind of polarization to expand, at least strong enough to avoid getting closer, and to re-establish their structure after any contraction. This strength propagates over long distances. Related points: the gravitational constant (G = 6.67408 × 10⁻¹¹ m³ kg⁻¹ s⁻²) and the problem of measuring it with high accuracy, since it can be related to the density exposed; the Schrödinger equation, which describes how the quantum state of a quantum system changes with time, similar to Newton's second law; Planck's length (1.616229 × 10⁻³⁵ m), which can indicate the distance between strings; gluon size, and its larger size far from Earth; black holes; and so on.
Sergio Perez Felipe added a reply:
You can try my theory; it is 50% strong force, 50% quantum vacuum.
It's really easy: it simplifies many ideas, because I don't use extra dimensions and I use simple maths to explain all fundamental forces.
Is there an alternative approach to incorporate research methods into a book chapter without dedicating a separate section for methods? If so, how can this be achieved, and where within the chapter is it advisable to include the methods?
Reflexw software generates a .T data file for each processing step, and the data format is nominally open. But the processed .T file is stored in binary, and they don't give specific details on 1) the location of the trace number; 2) the data-file sections; 3) the sample format, i.e. 16-bit or 32-bit, and so on. So I can't read the file.
Has anyone tried to read the processed .T format data file? Could you give me some tips?
Thanks!
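Since the missing pieces are exactly the header layout and sample format, one pragmatic first step is to probe the raw bytes under several candidate interpretations and look for plausible values (trace counts, sample counts, sampling intervals). Everything in this sketch, including the header size and the little-endian byte order, is an assumption for exploration, not the Reflexw specification:

```python
import struct

def probe_t_file(path, header_guess=128):
    """Heuristically probe an unknown binary file (e.g. a Reflexw .T file).

    The header size and layouts tried here are ASSUMPTIONS, not the real
    Reflexw format; adjust them after inspecting the printed values.
    """
    with open(path, "rb") as fh:
        raw = fh.read()

    print(f"file size: {len(raw)} bytes")

    # Dump the first bytes under several candidate interpretations;
    # a plausible trace count or samples-per-trace usually stands out.
    for fmt, size, name in (("<h", 2, "int16"), ("<i", 4, "int32"),
                            ("<f", 4, "float32")):
        vals = [struct.unpack_from(fmt, raw, off)[0]
                for off in range(0, min(header_guess, len(raw) - size), size)]
        print(name, vals[:8])

    return raw
```

A sanity check is that file size minus the guessed header should equal traces × samples × bytes-per-sample; trying 2 and 4 bytes per sample quickly tells you whether the data are 16-bit or 32-bit.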
Has anyone run slides in Visium with tissue sections (heart, kidney, brain) embedded in CMC rather than OCT for spatial transcriptomics?
This question is very important because its answer is that igneous rocks do not, and cannot, contain fossils. This is true and universally accepted, but a new paper published in a high-impact-factor journal (IF 6.5) claims that fossils exist in igneous rocks. The paper was written by six professors from very reputable universities.
Unfortunately, when geologists read the paper they realize that the authors do not differentiate between "on", "in", and "off". They found all the fossils on the surfaces of blocks of deformed igneous rocks (fractured and jointed deep igneous rocks); instead of "on the surface blocks of the igneous rocks" they used "in the igneous rocks", and they divided their paper into four main sections titled:
"Fossil record in igneous crust", "Fossil record in igneous oceanic crust", "Fossil record in continental igneous crust", and "A morphological atlas of fossils in igneous rock". These titles are all wrong, because their own discussion shows that they found the fossils on the surface of igneous rocks, not inside them. Therefore their paper is unacceptable and damaging for the academic community, because fossils can exist on the surface of all types of rocks and man-made materials. Moreover, they did not find fossils between the crystals of igneous rocks, but on the surfaces of deep blocks of broken igneous rock.
I found the optimal antibody dilution and incubation time to stain cells in fish brain sections (50 µm thick).
However, I need to stain those cells in thicker brain sections, and I was wondering what criteria to apply, if any, for the antibody dilution and incubation time so that I get results comparable to the thinner-section staining (e.g. increasing the incubation time according to the thickness).
Looking forward to your feedback.
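One hedged starting point, assuming antibody penetration is diffusion-limited: penetration depth scales roughly as the square root of time (depth ∝ √(D·t)), so the required incubation time scales roughly with the square of section thickness. This is only a first guess for titration, not a validated protocol; permeabilization and antibody depletion also change with thickness and must be checked empirically:

```python
def scaled_incubation(t_ref_hours, thickness_ref_um, thickness_new_um):
    """Scale incubation time assuming diffusion-limited penetration,
    i.e. time grows with thickness squared. A rough rule of thumb,
    not a validated protocol."""
    return t_ref_hours * (thickness_new_um / thickness_ref_um) ** 2

# e.g. an incubation optimised at 12 h for 50 um sections,
# scaled to 100 um sections:
print(scaled_incubation(12, 50, 100))  # -> 48.0
```

In practice most labs also keep the dilution the same and extend incubation (often at 4 °C), since increasing antibody concentration tends to worsen surface-versus-core staining gradients.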
I used Normal Goat Serum (NGS) for my immunostaining protocol.
However, I noticed that NGS is sometimes combined with Bovine Serum Albumin (BSA).
Is there any advantage to using both instead of NGS alone?
In your opinion, is it necessary to use both for staining?
Looking forward to get your feedback.
To assess the impact of secondary flow on the heat-transfer and fluid-flow characteristics of mixed convection of water in a circular pipe under a constant heat flux, the Boussinesq approximation is applied. What is the range of temperature across each cross-section of the pipe for which the Boussinesq approximation remains valid?
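As a rough check, the Boussinesq approximation is commonly taken as valid while the relative density change β·ΔT across the section stays small (a tolerance of about 0.1 is a frequent rule of thumb, not a universal limit). For water the strong temperature dependence of β itself usually binds first, so many references restrict ΔT to roughly 10-15 K. A minimal sketch, with the tolerance and the β value stated as assumptions:

```python
def boussinesq_ok(beta_per_K, delta_T_K, tol=0.1):
    """Crude validity check: Boussinesq holds while the relative density
    change beta * dT is small. tol ~ 0.1 is a common rule of thumb."""
    return beta_per_K * delta_T_K <= tol

# water near 25 C: thermal expansion coefficient beta ~ 2.57e-4 1/K
beta = 2.57e-4
print(boussinesq_ok(beta, 10))   # a 10 K cross-section range: fine
print(0.1 / beta)                # formal dT limit from this criterion alone;
                                 # for water, the nonlinearity of beta(T)
                                 # restricts dT far more tightly (~10-15 K)
```

The practical answer is therefore set by how far β(T) can be treated as constant over the cross-section, not by the β·ΔT < 0.1 criterion alone.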
Hello colleagues! What sections of information security relate to classified data from the position of publication? And is there liability in your countries for their disclosures in publications?
How should I count metastatic nodules in slides stained with H&E? Different sections have different numbers of nodules. Should I add them all, or should I report the section that showed the highest number?
I recently used ChatGPT to rewrite some sections of a paper. It helped me speed up the more technical sections, as repetition is something hard to avoid when English is not your first language (at least for me).
I was happy to use it, but then I realized that it might diminish my credibility when people read that AI was used in the publication. Am I overthinking it?
What do you guys think?
I recently conducted staining on brain sections of adult zebrafish using Nissl stain. The brains underwent pre-fixation in 4% paraformaldehyde, followed by storage in 75% ethanol at -20°C for a period of time, before being rapidly frozen in methylbutane on dry ice in an OCT mold. Cutting was then performed at a thickness of 20 microns at a temperature of -15°C, using charged slides. The stained slides were mounted with DPX and left to dry at room temperature for three days.
Unfortunately, upon examination at 10X magnification, the entire slice is not in focus. I also attempted to use gelatin-treated slides instead of charged ones, which yielded only slightly improved results.
I suspect that these issues may be attributed to two factors: 1) inadequate adhesion of the brain to the slide due to insufficient stickiness of the slide, and 2) the formation of micro-bubbles between the slide and the slice during the cutting process.
Please share your experiences or suggestions regarding this matter. Thank you!
I received a comment from a journal saying: "Could you please add justification for why a cross-sectional approach is appropriate here?"
I'm planning a project focused on studying osteocytes in the long bones of rats, requiring intact sections of both cortical and trabecular bone. To achieve this, I'm considering Kawamoto's tape sectioning technique for frozen blocks. My concern is about effectively adhering the tape, with the tissue section facing upwards, to ensure it remains attached to the glass slide throughout the entire FISH protocol. I'm interested in knowing if anyone has successfully used this technique for FISH applications with any type of tissue.
The topic must be a controversial issue facing the field of education specific to your specialization and not a problem that is addressed at the local level. The topic must be an issue or problem that has experts suggesting at least two (2) distinct sides to the issue, both presenting valid points of view. The candidate will be expected to complete an extensive and intensive discussion of all sides of the chosen issue, giving specific strengths and weaknesses of both sides with equal fervor. All points of view must be backed by appropriate research data, not just opinions, and the points of view must be given equal representation. The candidate must then address how they would deal with a disagreement among educational stakeholders on this issue. In a conclusion section, the candidate will present their own thoughts and ideas justified by supporting sources. Extensive use of primary source information will be expected.
I submitted a manuscript to a journal recently, and although their standard procedure involves two reviewers, I received feedback from three. While two of the reviewers provided positive feedback and suggested minimal or no changes, the third was notably critical. While I acknowledge and agree with some comments that could enhance the manuscript (though they don't represent significant shortcomings), certain remarks were either irrelevant or challenging to comprehend, potentially due to the reviewer's non-native English proficiency. Additionally, certain points he raised had already been addressed in the manuscript. That reviewer consistently labeled every section of the manuscript as "poor", including the results section, which was carefully written following APA style guidelines, with a balance between statistics and descriptions to ensure readers neither felt overwhelmed by statistics nor missed key information. He recommended rewriting the manuscript, and as a result the editor has requested a revision. Is it common for reviewers to lack expertise in certain areas, leading to concerns that may not be considered constructive criticism? How much weight do editors give to such comments in revision?
I need to visualize specific regions of mouse brain (e.g., substantia nigra). However, cryosectioning the entire brain tissue is resource intensive and not feasible. Is there a relatively quick and easy way to identify the anatomical location of previously cut brain sections without the use of staining or immunofluoresence?
One of the reviewers asked me to split the results and discussion section. Since I still want to continue the article submission with a combined results and discussion, please share your knowledge in this context. Thanks in advance.
How do I calculate the feeder size and the number of feeders required to produce a sound casting with dimensions of 200 cm length, 150 cm width, and 2 cm thickness, from which an inner section of 100 cm length, 100 cm width, and 2 cm thickness is subtracted? The casting is done in aluminium, and the feeder shape must be cylindrical with sleeves.
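One standard route is the modulus method: the feeder must have a larger solidification modulus (volume/cooling area) than the casting so it freezes last. The sketch below uses common textbook rules of thumb, not project data: a safety factor of 1.2 on the modulus, a feeder height of 1.5 diameters, and all feeder surfaces cooling. An insulating sleeve raises the feeder's effective modulus (manufacturers typically quote a factor of roughly 1.2-1.5), allowing a smaller feeder; the number of feeders then follows from feeding-distance rules for plates, which depend on the alloy, so verify everything against your foundry practice:

```python
# Plate-frame casting: outer 200 x 150 x 2 cm with a 100 x 100 x 2 cm
# opening removed. Factors below are textbook rules of thumb, NOT
# project-specific data.

t = 2.0                                   # plate thickness, cm
V_cast = (200 * 150 - 100 * 100) * t      # casting volume, cm^3

# Thin plate: modulus ~ t/2 (edge areas neglected)
M_cast = t / 2                            # cm

# Feeder must freeze last: M_feeder >= k * M_cast, k ~ 1.2 (assumed)
k = 1.2
M_feeder = k * M_cast

# Cylindrical feeder with H = 1.5 D, all surfaces cooling:
#   M = V/A = (pi D^2 H / 4) / (pi D H + pi D^2 / 2) = D*H / (4H + 2D)
# With H = 1.5 D this reduces to M = 0.1875 D, so:
D = M_feeder / 0.1875
H = 1.5 * D
print(f"casting volume: {V_cast:.0f} cm^3")
print(f"feeder size:    D = {D:.1f} cm, H = {H:.1f} cm")
```

For the feeder count, a common plate rule credits each feeder with a feeding distance of a few plate thicknesses from its edge; laying such zones along the 20 cm-wide frame perimeter gives the required number, and the sleeve's modulus credit lets you shrink D accordingly.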
I am sectioning human / mouse tissue on a vibratome. At both 50um and 20um, the sections float off in surrounding PBS still with the surrounding agar layer attached, and I can't seem to easily remove it without disturbing the tissue.
This happens at both 4% and 10% agar. I have been using wet ice surrounding the PBS-filled vibratome chamber to keep everything cool, since at room temperature the sections did not cut well.
Has anyone else experienced this, or does anyone have any suggestions? Thank you.
The effect of sediment and aquatic plants on the slopes of the energy line, velocity, and discharge: the water plants occupy part of the cross-sectional area and dissipate velocity through vortices, producing both negative and positive discharge. The negative discharge is reverse flow, which subtracts from the total discharge and decreases the velocity, also reducing the energy slope and the hydraulic grade line; hence we find that the water level is high but the discharge is low.
Hi:
I am performing immunofluorescence staining on 100 µm, 4% PFA-fixed pancreas sections using our self-made polyclonal primary antibody. The tissue was gradient-dehydrated in methanol diluted in 0.2% NP40. The sections were blocked with 1% BSA, 4% FBS, and 0.1% Tween-20 in PBS for 1 h, but I ended up with a poor staining result under confocal examination. Is there any possible solution? Thanks!
Greetings everyone. I am facing a problem: I need to activate the legend section, but I cannot do this. Please help me. How can I activate the legend section?
Here is the attached related file below. Please see this. Thanks in advance.
Dear ResearchGate network,
Recently I received an invitation to act as Associate Section Editor of Current Bioinformatics for Bentham Science for a period of 2 years, depending on performance. Since I don't have sufficient experience as an editor, they asked me to give the name of a senior researcher with an h-index of at least 15 and knowledge of Bioinformatics to act together with me as Section Editor (co-editor). The role of this co-editor is to propose at least one issue per year on a relevant theme for a special edition in Bioinformatics. Does anyone here know someone who fits this profile and could indicate him/her to me, please?
I thank you in advance.
Pedro Paulo Gattai Gomes, PhD
I tried to place my .itp #include after the charmm .itp include in the topology, but it did not work.
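In GROMACS the order of #include statements in the .top file matters: the force-field comes first, any extra molecule .itp (which may contain [ atomtypes ]) comes next, and the water/ion topologies come after it, before [ system ]. A minimal sketch of a working ordering, where the force-field directory name and "myligand.itp" are placeholders for your own files:

```
; topol.top -- include order matters in GROMACS topologies
#include "charmm36.ff/forcefield.itp"

#include "myligand.itp"        ; extra molecule AFTER forcefield.itp
                               ; (its [ atomtypes ], if any, must appear
                               ; before any [ moleculetype ] sections)

#include "charmm36.ff/tip3p.itp"
#include "charmm36.ff/ions.itp"

[ system ]
Protein with ligand in water

[ molecules ]
Protein_chain_A   1
LIG               1
SOL            1000
```

If grompp still fails, the error message usually names the first directive it met out of order, which tells you which include to move.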
Hello, what does each of the expressions in this algorithm mean in Ansys? For example, in this section, what does KEYOPT, 8, 9, 1 mean? What is KEYOPT?
! contact between tibial articular cartilage and meniscus
! (KEYOPT, itype, knum, value sets key option knum of element type itype)
MAT, 2                 ! set active material number
R, 3                   ! define real constant set 3
REAL, 3                ! set active real constant set
ET, 7, 170             ! element type 7 = TARGE170 (target segments)
ET, 8, 175             ! element type 8 = CONTA175 (node-to-surface contact)
KEYOPT, 8, 9, 1        ! KEYOPT(9): treatment of initial penetration/gap
KEYOPT, 8, 10, 0       ! KEYOPT(10): contact stiffness update scheme
KEYOPT, 8, 12, 0       ! KEYOPT(12): contact behavior (standard)
KEYOPT, 8, 5, 2        ! KEYOPT(5): automated CNOF/ICONT adjustment
R, 3,
! generate the target surface
CMSEL, S, TCS, NODE    ! select node component TCS
TYPE, 7
ESLN, S, 0             ! select elements attached to the selected nodes
ESURF                  ! overlay target elements on the selected faces
CMSEL, S, ELEMCM
! generate the contact surface
CMSEL, S, MMTIBR, NODE
CMSEL, A, LMTIBR, NODE
TYPE, 8
ESLN, S, 0
ESURF                  ! overlay contact elements
How can I learn this language?
Thank you
I have created a 2d geometric model in order to investigate the fracture toughness of a reinforced composite using epoxy as a matrix.
I have inserted a predetermined crack and defined it as a crack by inserting a cohesive seam. I did this because it allowed me to assign a material to that crack section; creating a crack with a defined thickness was unsuitable, since I need to see what becomes of the interaction between the particles and the matrix.
The problem I am facing is how to define the cohesive properties of this crack.
How do I define the three Nominal stresses, as well as the damage evolution?
I have been using cohesive element type within my mesh for the cohesive seam.
I have tried to troubleshoot to the best of my abilities but I am not sure where to start or where to get the necessary information from.
Any help would be incredibly appreciated.
Many thanks in advance.
Edit: I have attached photos showing the cohesive properties I've assigned. If anybody could please highlight what is wrongly defined, or what I could do instead to allow my model to run accurately, it would be greatly appreciated.
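For the traction-separation definition, Abaqus's standard options are a quadratic nominal stress (QUADS) initiation criterion followed by a linear (or energy-based) softening law. The three nominal stresses (normal Nmax, and the two shear strengths Smax, Tmax) are material data for the epoxy/particle interface and must come from literature or tests; the model cannot invent them. A sketch of the two formulas behind those dialog boxes, with all numeric inputs hypothetical:

```python
def quads_initiation(tn, ts, tt, Nmax, Smax, Tmax):
    """Quadratic nominal stress (QUADS) criterion: damage initiates when
    f reaches 1. Only tensile normal traction contributes, so tn is
    clamped at zero from below (compression does not open the crack)."""
    return (max(tn, 0.0) / Nmax) ** 2 + (ts / Smax) ** 2 + (tt / Tmax) ** 2

def linear_damage(delta_max, delta0, delta_f):
    """Linear-softening damage variable D in [0, 1] for displacement-based
    evolution: D = delta_f*(delta_max - delta0) /
                   (delta_max*(delta_f - delta0)),
    where delta0 is the separation at initiation and delta_f at failure."""
    if delta_max <= delta0:
        return 0.0
    if delta_max >= delta_f:
        return 1.0
    return delta_f * (delta_max - delta0) / (delta_max * (delta_f - delta0))
```

Equivalently, damage evolution can be specified by the fracture energy Gc (the area under the traction-separation curve), which is often easier to find in the literature for epoxies than delta_f itself.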
Hello,
I am using the RNAscope® Multiplex Fluorescent Reagent Kit v2 on 14 µm thick brain sections mounted on SuperFrost slides. Recently I encountered a problem that was not an issue before: after applying RNAscope hydrogen peroxide, bubbles appear on the sections. As far as I can tell, they also form underneath the sections. As a result, I lose most, if not all, of the sections on the slides. I would really appreciate your help in identifying the problem and eventually solving it.
Thank you very much!
Best,
Firdevs
They are publishing papers from law, management departments, etc., but the stated scope of the journal is:
Section A: Organic Synthesis and Chemistry of Natural Products
Section B: Pharmaceutical and Medicinal Chemistry
Section C: Analytical and Environmental Chemistry
Section D: Inorganic and Material Chemistry
Section E: Biochemistry, Chemical Biology and Biotechnology
Section F: Physical, Theoretical and Computational Chemistry
Section G: Chemistry of Industrial Processes
Section H: Quality Consideration accepting multidisciplinary Scope Manuscripts
The problem is hot extrusion of Ti-6Al-4V from a circular section to an elliptical section; the solver is coupled temp-displacement. Four steps are defined: the first establishes the contact (diagnosed by ABAQUS) by moving the material 1 mm down; the second moves the material to complete the extrusion; the third deactivates the interaction between the die and the part; and the fourth applies convection and radiation to the part to simulate cooling. I am using the ALE method for this problem. Please help me resolve the error: it occurs in the second step, where ALE is defined. I have modeled 1/4 of the whole geometry to reduce the number of elements.
Dear Experts
I downloaded and installed the Desmond/Schrödinger software on my Linux system. Installation was fine, but when I tried to prepare the protein in the Protein Preparation Wizard I noticed that the "Generate States" icon in the Review and Modify section and the "Minimize" icon in the Refine section are not functioning. Could anyone please help resolve this issue?
Dear Friends,
I want you to read section 4.2 of the following paper and comment.
Happy New Year
Gas-Solid Reactions (Julian Szekely, James W. Evans, Hong Yong Sohn) is a good reference for gas-solid reactions. In the section on chemical reaction control, which I have attached here, the equation is converted to dimensionless form to calculate the conversion X as a function of t.
To calculate this equation, must I convert it to dimensionless form, or is there another solution?
A(g) + bB(s) → cC(g)
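For reference, the chemical-reaction-controlled shrinking-core result in Szekely et al. is usually written in the dimensionless form 1 − (1 − X)^(1/3) = t/τ, where τ is the time for complete conversion. A minimal sketch of evaluating X(t) from that form (the function name is my own):

```python
def conversion(t, tau):
    """Conversion X at time t for a shrinking-core particle under
    chemical-reaction control: 1 - (1 - X)**(1/3) = t / tau."""
    if not 0.0 <= t <= tau:
        raise ValueError("valid for 0 <= t <= tau (complete conversion at t = tau)")
    # Invert the dimensionless relation for X
    return 1.0 - (1.0 - t / tau) ** 3
```

Working with t/τ directly is equivalent to solving the dimensional equation; the dimensionless form simply collapses the rate constant, stoichiometry, and particle geometry into the single parameter τ.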
I have attached some photos of what I am observing when I cryosection P0 mouse brains. I have had this issue for a while and have made many adjustments to fix it, but no luck.
For reference, here is the protocol I have been using:
- Dissect out the whole P0 mouse brain at RT in 1x PBS
- Drop-fix the brain in 4% PFA on a rocker for one day in the 4°C fridge (I have used several different PFA solutions; none resolved the issue)
- Graded sucrose solutions (15-25-30% sucrose in 1x PBS) over the span of two days at 4°C on a rocker (at the end of the 2 days, the brains have completely sunk)
- Embed the brains in OCT using a mold in a dry-ice bucket with a lid (ensuring there is no interface between the brain and OCT, i.e., drying off sucrose with a KimWipe before swirling the brain in OCT)
- Store the brain at -80°C until cryosectioning
- Place the brain in the cryostat (-18 to -20°C; I've tried different cutting temps with no luck) for 30 min prior to sectioning
After all of this, my sections themselves are coming out flat, but the area containing the tissue is either bunched up or has a large hole. Also, the brain on the mounting chuck looks dry/flaky. I originally thought that sucrose was not diffusing through the tissue and displacing the PFA, so I adopted a graded sucrose protocol, which I've done at both RT and 4°C, with no luck. I have tried thawing with my thumb before each section, also with no luck.
Really any help is appreciated!!
Any chance that a 1x PBS solution with a pH of 7.7 would be an issue? I figured not, because it's a buffer, but could this be the problem?
I modeled the 2D frame with OpenSeesPy so that the concrete class is variable; there is a distributed load on the beams and a horizontal load on only 2 nodes. I set up the static analysis this way, but I am getting an error in the analysis part.
My modeling steps are very similar to the OpenSeesPy 2D Portal Frame example:
However, while the example performs the analysis using eigen, I did not use eigen. I would like your comments.
import time
import sys
import os
import openseespy.opensees as ops
import numpy as np
import matplotlib.pyplot as plt
m = 1.0
s = 1.0
cm = m/100
mm = m/1000
m2=m*m
cm2=cm*cm
mm2 = mm*mm
kN = 1.0
N = kN/1000
MPa = N/(mm**2)
pi = 3.14
g = 9.81
GPa = 1000*MPa
ton = kN*(s**2)/m
matTag=1
for i in range(0, 8):
    # remove existing model
    ops.wipe()
    # set modelbuilder
    ops.model('basic', '-ndm', 2, '-ndf', 3)
    L_x = 3.0*m  # Span
    L_y = 3.0*m  # Story Height
    b = 0.3*m
    h = 0.3*m
    # Node Coordinates Matrix (size: nn x 2)
    node_coords = np.array([[0, 0], [L_x, 0],
                            [0, L_y], [L_x, L_y],
                            [0, 2*L_y], [L_x, 2*L_y],
                            [0, L_y], [L_x, L_y],
                            [0, 2*L_y], [L_x, 2*L_y]])
    # Element Connectivity Matrix (size: nel x 2)
    connectivity = [[1, 3], [2, 4],
                    [3, 5], [4, 6],
                    [7, 8], [9, 10],
                    [7, 3], [8, 4],
                    [9, 5], [10, 6]]
    # Get Number of elements
    nel = len(connectivity)
    # Distinguish beams, columns & hinges by their element tag ID
    all_the_beams = [5, 6]
    all_the_cols = [1, 2, 3, 4]
    [ops.node(n+1, *node_coords[n])
     for n in range(len(node_coords))];
    # Boundary Conditions
    ## Fixing the Base Nodes
    [ops.fix(n, 1, 1, 1)
     for n in [1, 2]];
    fpc = [30, 33, 36, 39, 42, 45, 48, 50]
    epsc0 = [0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002]
    fpcu = [33, 36, 39, 42, 45, 48, 51, 54]
    epsU = [0.008, 0.0078, 0.0075, 0.0073, 0.0070, 0.0068, 0.0065, 0.0063]
    Ec = (3250*(fpc[i]**0.5) + 14000)
    A = b*h
    I = (b*h**3)/12
    ops.uniaxialMaterial('Concrete01', matTag, fpc[i], epsc0[i], fpcu[i], epsU[i])
    sections = {'Column': {'b': b, 'h': h, 'A': A, 'I': I},
                'Beam': {'b': 300, 'h': 500, 'A': 300*300, 'I': (300*(300**3)/12)}}
    # Transformations
    ops.geomTransf('Linear', 1)
    # Beams
    [ops.element('elasticBeamColumn', e, *connectivity[e-1], sections['Beam']['A'], Ec, sections['Beam']['I'], 1)
     for e in all_the_beams];
    # Columns
    [ops.element('elasticBeamColumn', e, *connectivity[e-1], sections['Column']['A'], Ec, sections['Column']['I'], 1)
     for e in all_the_cols];
    D_L = 0.27*(kN/m)  # Distributed load
    C_L = 0.27*(kN)    # Concentrated load
    # Now, loads & lumped masses will be added to the domain.
    loaded_nodes = [3, 5]
    loaded_elems = [5, 6]
    ops.timeSeries('Linear', 1, '-factor', 1.0)
    ops.pattern('Plain', 1, 1)
    [ops.load(n, *[0, -C_L, 0]) for n in loaded_nodes];
    ops.eleLoad('-ele', *loaded_elems, '-type', '-beamUniform', -D_L)
    # create SOE
    ops.system("BandSPD")
    # create DOF number
    ops.numberer("RCM")
    # create constraint handler
    ops.constraints("Plain")
    # create integrator
    ops.integrator("LoadControl", 1.0)
    # create algorithm
    ops.algorithm("Linear")
    # create analysis object
    ops.analysis("Static")
    # perform the analysis
    ops.analyze(1)
    # get node displacements
    ux = ops.nodeDisp(5, 1)
    uy = ops.nodeDisp(3, 1)
    print(ux, uy)

print('Model built successfully!')
Yesterday, I shared the link to this article with several other RG members, and some of us have posted comments there. But I think it might be a lot easier to continue the discussion here (in a regular discussion thread) than it is via the Comments section there.
Here is the link to the article:
Cheers,
Bruce
Hi,
I came across an issue where the graphical results do not match the optimal reconstruction in the list section of RASP v4.0. In the graphical section, I get a 50:50 division of a node pie, color-coded as 50 state B : 50 state G. But in the List section, the same node gives: node 66: F 98.48 C 0.51 G 0.51 B 0.51 BE 0.00 BEF 0.00 ....
Meaning I would expect the graphical result to be a pie with a majority of the F state.
Any idea why that could be?
Thanks,
Karolina
I am not able to find the call-for-chapters section.
Could someone please guide/help?
Are the attached images Bryozoa?
Some images look like they have a bryozoan zoarium, and some look like trepostome Bryozoa. They probably belong to the Permian. In one of the images, an axial section of what is probably a fusulinid with apparent septa is recognizable.
The liver sections look fragmented. How could the histological technique be improved to better observe this tissue under the microscope?
The cells mentioned here are present in both the outer and inner phloem.
Hi!
I'm working with mouse brain tissue post-fixed in 4% paraformaldehyde, then 30% sucrose and OCT embedding for cutting in the cryostat. During the cutting process I didn't have problems, although I did notice the sections rolled up very easily. The sections (20 µm) were stored in antifreeze solution (with glycerol and ethylene glycol) until use. When I do immunohistochemistry, the sections are already rolled and become very fragile and break easily, especially on the second day. Also, looking at them under magnification, the tissue does not appear to be in the best condition. Could it be a problem with the post-fixation process? If so, shouldn't it have given me problems when I cut it?
How could it be solved? Thank you!
After sectioning a couple of blocks of tissue embedded in OCT, I realized that they aren't positioned exactly right. It would be difficult to mount the blocks on the chuck to section them in the correct position. Does anyone know if it would be possible for me to thaw the remaining tissues to reposition them?
I have several deep-sea bivalve specimens, and I have observed that the Mutvei staining efficiency is not satisfactory on the shell sections: only a small portion of the growth lines appear clearly stained. Could someone suggest potential factors that may be influencing this, or some improvement measures?
- I have noticed a growing trend in research article authorship, varying from five authors to anywhere around 15. How are these articles cited in-text (given that the general rule was to cite all authors the first time and then use et al.) and in the references section?
Hi everyone, I have a question regarding my dataset and would be grateful for any help:
I have a dataset I downloaded online that gives me information about variables like pool size, deal volume, etc. Every observation is a different tranche that is part of a deal (a deal may contain multiple tranches). The tranches are included at the time the deal is launched. The dataset contains every tranche from the last 10 years. From my understanding, panel data is data collected at multiple times. My problem of understanding is that every observation (tranche) is collected just once, at launch date. Although the dataset contains observations over the last 10 years, so I collect data over time (-> panel data), it is not for the same entities, because a tranche is part of a deal that is observed when the deal launches. So looking at tranches in year 1 and year 4, they're not the same tranches.
I hope I managed to explain what I mean. I have trouble distinguishing whether this is panel data, pooled cross-sectional data, or maybe something else.
Help is really much appreciated. Kind regards.
The Digital Transformation in Higher Education is related to a robust, resilient adoption of technologies, processes, strategies and workflows. Different stakeholders are also partnering in the orchestration of efforts for a next generation era of Higher Education with emphasis on new technologies, innovative administration models and innovative services for learners and faculty. In the same context Artificial Intelligence and other emerging technologies set new challenges and bring new social impact to the traditional and well-established perceptions of Higher Education.
In this book, there are seven sections providing a unique value proposition for the relevant area:
- SECTION A: Foundations of Digital Transformation in Higher Education
- SECTION B. Enabling Technologies for Digital Transformation in Higher Education (Robotics, Collaborative Learning, MOOCs, Mobile Learning, Cognitive Computing, Learning Analytics, Social Media, Artificial Intelligence etc.)
- SECTION C. Innovative Learning Strategies for Digital Transformation in Higher Education
- SECTION D. Active, Transformative and Technology Enhanced Learning in Higher Education
- SECTION E. Case Studies, Lessons Learnt and Research and Development Projects of Digital Transformation in Higher Education
- SECTION F. Next Generation Artificial Intelligence Enabled Digital Transformation in Higher Education
- SECTION G: Higher Education 2050 Vision
This edition can serve as a reference edition as well as a teaching book for postgraduate studies on the relevant domain.
In the first two sections we elaborate on the epistemology of Digital Transformation in HE and on the enabling technologies. A comprehensive coverage of the most recent developments in education technologies, including Robotics, Collaborative Learning, MOOCs, Mobile Learning, Cognitive Computing, Learning Analytics, Social Media, Artificial Intelligence, etc., will allow readers to follow the most influential technologies of our times and to appreciate their potential value and impact for HE institutions.
The third and fourth sections are about learning strategies, with emphasis on Active, Transformative, and Technology Enhanced Learning. The fifth section is related to Case Studies, Lessons Learnt and Research and Development Projects of Digital Transformation in Higher Education. It hosts diverse approaches, complementary innovations, best practices, and benchmarks in the domain of DT in HE.
The sixth section is dedicated to a comprehensive discussion of the Artificial Intelligence as an enabling technology of a fully disruptive educational technology with diverse impact and implications in Higher Education.
In the last section, we provide insights for strategic policy making in higher education, with emphasis on the Vision of the Next Generation Higher Education 2035. The objective is to promote a new human-centric vision in universities and colleges, linking education to sustainability and prosperity for all.
Important Deadlines
Abstract Submission: November, 30, 2023
Full Chapter Due: February, 25, 2024
Final Chapters (review comments integrated): April, 15, 2024
Publication Day: Q4, 2024
Editors:
- Miltiadis D. Lytras , Effat University, Saudi Arabia
- Andreea Claudia Serban, Bucharest University of Economic Studies, Romania
- Afnan Alkhaldi, Arab Open University Kuwait Branch, Kuwait
- Tahani Aldosemani, Prince Sattam bin Abdulaziz University Saudi Arabia
To be objective and brief, let me refer to my personal experience
in the Q/A section of Researchgate:
On October 7, 2023, a response came from an unemployed immigrant in Porto as follows:
your answer reflects the fact that your understanding of physics needs to be adjusted. !?
3 weeks later, on October 29, the same hunter (I...M...), encouraged by the silence of the RG Q/A team, came back with a poor answer to my question:
, your question reflects your ignorance.!?
Do you have any suggestions?
I have a cross-sectional data set, and I want to run a CDM model to identify determinants of technology and innovation.
I am making longitudinal frozen sections of skin on a Leica machine.
I set the temperature to -30℃, but when I start sectioning the temperature reads -23 to -27℃. Is this temperature difference a mistake?
I don't know why this problem appears only around the tissue.
Base on paper
I have a bi-level max-min problem; the lower level is a single-level linear minimization program and is always feasible, and through strong duality we can obtain a single-level maximization.
I can't understand the dual problem in the main paper (section 4.2), so I derived the dual of the lower level on my own, and it differs from the main paper.
I have uploaded the primal problem and my dual problem; please let me know if my dual problem is wrong, or explain the dual problem of the main paper (section 4.2).
A structure consisting of three portal frames is proposed to have two of the three portal frames dismantled and a multi-story superstructure with a basement built in their place.
With no experience in the geotechnical field, I am unsure how to build the superstructure alongside the remaining portal frame without encroaching on the existing foundation supporting the portal frame and destabilizing the soil.
My initial thoughts were either to underpin the existing foundation and pour a combined footing at basement level to support the existing and future buildings, or alternatively to build a meter away from the existing building and cantilever a section of the new building.
Any opinions and thought would be much appreciated,
Thank you in advance for your time.
I am attempting to corroborate the efficacy of my computational method of determining molecular isotropic polarizability by comparing with known experimental values. It is my understanding that the best way to do this is to convert the experimental refractive index to polarizability via the Lorentz-Lorenz equation as such: p = (3/(4*pi*N))*((n^2-1)/(n^2+2)). However, I am unsure as to how to obtain N for such an equation. I have seen "N is the number of molecules per unit volume," but I am unsure how to find this. I collected refractive indices from PubChem (e.g. https://pubchem.ncbi.nlm.nih.gov/compound/10659#section=Density). Should I simply use the density of the solid and its molecular weight? This seems problematic as the density has a wide range of values, as does the refractive index.
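For what it's worth, N is typically obtained from the mass density ρ and molar mass M as N = ρ·N_A/M. A sketch under that assumption (CGS units, so α comes out in cm³; the water values below are illustrative only):

```python
import math

N_A = 6.02214076e23  # Avogadro's number, 1/mol

def number_density(rho_g_cm3, molar_mass_g_mol):
    """Molecules per cm^3 from mass density and molar mass: N = rho * N_A / M."""
    return rho_g_cm3 * N_A / molar_mass_g_mol

def lorentz_lorenz_alpha(n, N):
    """Isotropic polarizability (cm^3) via the Lorentz-Lorenz equation:
    alpha = (3 / (4*pi*N)) * (n^2 - 1) / (n^2 + 2)."""
    return (3.0 / (4.0 * math.pi * N)) * (n**2 - 1.0) / (n**2 + 2.0)

# Illustrative numbers: liquid water, rho ~ 0.997 g/cm^3, M = 18.015 g/mol, n ~ 1.333
N_w = number_density(0.997, 18.015)
alpha_w = lorentz_lorenz_alpha(1.333, N_w)
```

Because both ρ and n depend on temperature and wavelength, they should be taken at the same conditions; the spread of values on PubChem largely reflects different measurement conditions rather than a flaw in the approach.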
May I get any idea about the development of homogeneous road section and its limitation
Hello everybody,
I could not simulate wind-speed fluctuations by assigning a turbulence intensity (TI).
I simulate fluid-structure interaction using STAR-CCM+ and Abaqus, and I simulate the flow field with IDDES. I set field functions to assign the inlet velocity and turbulence intensity, respectively. Some points are set to record the time history of wind speed in the freestream direction. However, the results show that the wind speed does not fluctuate appreciably (i.e., it varies only slightly with time). I calculated the turbulence intensity according to its definition: it was 0.0001%, indicating that the wind speed was almost constant.
When I use STAR-CCM+ alone (without Abaqus) and set "Synthetic Turbulence Specification" from "none" to "intensity + length scale" in "continuum" and "region - boundary - inlet - physical conditions", the wind speed varies noticeably with time. However, there was a warning when I set "Synthetic Turbulence Specification" in the fluid-structure interaction:
“The Synthetic Turbulence Specification applied to boundary ‘S.Con.inlet’ in region “flow”, is not compatible with the Motion Specification in that region.”
The Synthetic Turbulence Specification had to be suppressed when mesh morphing of the flow field was considered.
The setting of regions and simulation are shown in attachments.
The computational region was 1000 m (x) *500 m (y) * 500 m (z)
The section of structure was 6 m (x) *6 m (y) * 50 m (z)
Is there anything I forgot to check? How can I get a certain level of turbulence at a specific spot in the field through the inlet boundary specification?
Any feedback on the approach/idea itself is welcome too.
A research article is a primary source of information that reports on the findings of an original study. It typically includes an introduction, methods, results, and discussion sections. The purpose of a research article is to communicate the findings of a study to other researchers and the general public.
A review paper is a secondary source of information that summarizes and synthesizes the findings of other research articles. It typically includes an introduction, literature review, discussion, and conclusion sections. The purpose of a review paper is to provide an overview of the current state of knowledge on a particular topic and to identify areas of future research.
I hope this article helps you understand the different types of research articles accepted by journal publications.
What is the best mathematical technique used to derive the equation of the compound cross section of the real irrigation channels?
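One common approach (not necessarily "the best") is the divided-channel method: split the compound section vertically into subsections, compute area and wetted perimeter per subsection (excluding the imaginary division lines), and sum. A minimal sketch for a symmetric rectangular main channel with two rectangular floodplains; all dimensions and names are my own, hypothetical choices:

```python
def compound_section(b_main, h_bank, b_fp, depth):
    """Area, wetted perimeter, and hydraulic radius of a symmetric
    rectangular compound channel, split by vertical division lines.
    b_main: main-channel width; h_bank: bankfull depth;
    b_fp: width of each floodplain; depth: total water depth."""
    if depth <= h_bank:  # flow confined to the main channel
        area = b_main * depth
        perimeter = b_main + 2 * depth
    else:
        h_fp = depth - h_bank  # depth of water over the floodplains
        area = b_main * depth + 2 * (b_fp * h_fp)
        # main-channel bed + side walls up to bank level,
        # plus the two floodplain beds and outer walls
        perimeter = (b_main + 2 * h_bank) + 2 * (b_fp + h_fp)
    return area, perimeter, area / perimeter
```

With per-subsection conveyances K_i = (1/n_i)·A_i·R_i^(2/3), the total discharge then follows from Q = (ΣK_i)·S^(1/2), which is why the subdivision rather than a single lumped section matters for real irrigation channels.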
I have created a rectangular model in ABAQUS, and I am trying to create Oblique sections within the model so that I can assign different material properties to each section in order to create anisotropy.
But, when I am done with the sketching part and trying to proceed further, the error pops up saying, "section must be closed for this type of feature".
I have just created oblique lines at a 30-degree angle with the oblique tool in the right-side toolbar. This error appears only when there are open edges, highlighting the vertices which are open, but I am unable to understand how the edges can be open if the rectangular model is already closed.
Please help!
I am a newbie to LaTeX. I was trying to use references in LaTeX.
I was attempting to have the author's name and year of publication appear in my literature section in place of reference numbers (for example, [1] --> XYZ1 (2007)). Additionally, I don't want square brackets in the reference section (e.g., [1] --> 1.).
I have tried but am unable to sort out this issue.
I have attached the file.
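A common way to get author-year citations (and to change the reference-list labels) is the `natbib` package with an author-year bibliography style; a minimal sketch, assuming a standard BibTeX workflow (the key `xyz2007` and file `references.bib` are placeholders):

```latex
\documentclass{article}
\usepackage[round]{natbib}      % author-year citations: (XYZ, 2007)
\bibliographystyle{plainnat}    % an author-year compatible style

% If you instead keep a numeric style and only want "1." rather than "[1]"
% in the reference list, redefine the label:
% \makeatletter
% \renewcommand\@biblabel[1]{#1.}
% \makeatother

\begin{document}
\citet{xyz2007} showed ...              % textual: XYZ (2007) showed ...
... as shown earlier \citep{xyz2007}.   % parenthetical: (XYZ, 2007)
\bibliography{references}               % assumes references.bib exists
\end{document}
```

`biblatex` with `style=authoryear` is the other standard route if you are not tied to BibTeX styles.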
I've been recently reading the GraphSage paper, and I'm having trouble understanding the "Proof of Theorem 1" in Appendix D. I couldn't find any explanations regarding this part. I was wondering if anyone has insights or if someone could explain the theoretical details of this section to me. I would greatly appreciate it.
Which factors really lead to a significant difference in 2 cross section values?
I am using this protocol to stain intraepidermal nerve fibers in the epidermis of paraffin-embedded 3 mm skin sections.
I first deparaffinized and hydrated the tissues, then applied citrate buffer pH 6 antigen retrieval in a water bath for 1 hour.
Incubate with 10% normal goat serum for 1-4 hours, primary polyclonal rabbit antibody against PGP 9.5 (1:50) for 24 hours at 4 degrees, secondary biotinylated goat anti-rabbit (1:100), 3% H2O2 block for 10-15 min, streptavidin-HRP (1:50-1:100), fresh DAB, and counterstain with hematoxylin.
What is wrong? Why do keratinocytes falsely stain with DAB?
N.B. I diluted the primary, secondary, and streptavidin-HRP in 1x PBS with 0.5% Tween-20.
What is new in what I have brought to construction that cannot be questioned by anyone?
The earthquake imposes a force on the structure and the structure resists that force with the cross-sections of the load-bearing elements.
The cross sections of the beam plates and wall and column sections have some strength and then they break. If we increase the cross sections and reinforcement to increase the strength of the structure without the earthquake being stronger, the seismic loads also increase because they are tied to the mass of the structure. Greater mass = greater loads with the same acceleration.
Conclusion. The structure is saturated with reinforcement and concrete and that's the limit of its strength. No more.
This strength that the structure currently has only responds to small earthquakes and medium earthquakes with some post-earthquake damage repairs. No structure in the world can withstand very large earthquakes.
And I come to the civil engineers and say.
If you increase reinforcement and concrete to increase strength, it is futile, because the seismic loads increase along with the strength.
You have to increase the strength without increasing the mass.
And I give them two solutions not one to increase strength without increasing mass.
First solution. We take force from the ground, that is an external force, which has no mass because it comes from the ground and we use it to help the cross sections additively 1+1=2 to cope with the earthquake.
Second solution. In the earthquake 30% of the cross section of the concrete wall is active. The other 70% is inert, offers nothing and only does nothing but increase as a mass the earthquake forces. If we apply prestressing the whole concrete cross section becomes active and contributes to the earthquake without increasing inertia.
I need to stain mouse brain sections for c-Fos and phospho-S6 (which is rabbit). I'm looking for a c-Fos antibody raised in mouse, because I don't want to perform multiple immunolabeling with primary antibodies originating from the same host species.
Hi all
Is there anywhere on Research Gate where we can add a call for papers? I don't want to add it as my 'research'. Is there perhaps a forum that we could use?
thanks
Does anybody here know how to use ImageJ to determine the OD of Luxol Fast Blue-stained sections? I need help.
I want to compare the spared myelin in each group quantitatively using stained spinal cord slides.
I appreciate all researchers' assistance.
After deparaffinising, can I store the sections before proceeding with staining?
Should I store them in water or PBS? Can I leave them at room temperature?
Here we see in the first video a very strong house with dynamic walls that simply presses against the seismic base just as buildings today simply press against the ground, which at the slightest shift tends to topple over entirely. Why does it topple and not break in two? Because the walls are very strong, stronger than the unsupported loads of the building.
A tipping moment in one direction rotates the building around itself, and another moment in the opposite direction, coming from the now-unstable static loads of the building suspended in the air, opposes it. If the cross-sections of the vertical walls can withstand this contrast of moments, then the structure will either overturn or return to its original position intact and undamaged. If they cannot, it will be cut in two.
See the video of the specimen that withstood the two opposing moments: https://www.youtube.com/watch?v=Ux8TzWYvuQ0
If we bolt the same building onto the seismic base or onto the ground, then it will neither topple nor lose its support from the ground so there will be no opposing moment from the unstable static loads since it will not topple. So nothing will happen to it or it will not tip over. See video https://www.youtube.com/watch?v=Q6og4VWFcGA
Now let's remove the strong walls and leave it with weaker corner walls and a beam at the top, and do the same experiment again without bolting it to the seismic base or ground.
We will see, after some oscillations, that the contrast of the two moments, coming from the overturning moment and the unsustainable static loads, broke both the beam and the wall sections at the node.
Let's now repeat the last experiment with the corner walls and the large beam on the roof but bolting the wall faces to the seismic base or to the ground to see if it will break again as the previous one did.
Not the slightest damage, although the acceleration that shook it was three times the acceleration of the other one that broke.
Conclusion
If we bolt the building to the ground, then even during the earthquake it does not lose the support of the ground so its own unsustainable loads that broke it no longer break it because the ground supports them.
Civil engineers will say that their own buildings don't rise from the ground either.
That is correct.
But they don't lift, not because there is no overturning moment in their buildings, but because the weak cross-sections of the beams are unable to lift the structure into the air; they break before lifting it, like a reed breaks when a big fish bites it.
The house is not destroyed by the earthquake but by its own weight resisting the rotation of the moment.
If you bolt it to the ground the overturning moment is deflected into the ground and the structure will not be damaged.
If the fish bites a little then the reed beam will get away with it due to the fact that it has some elasticity.
If the bite is sharp and strong it will break
Following fixation for 18-24 h, I store the whole brains in PBS containing Na-azide at 4°C. On the sectioning day, I place the brain on the freezing stage using a couple of drops of OCT applied to the base. Then I gradually add more OCT to cover it. I start sectioning when the OCT turns white, with the temperature set to -18 to -20°C. My main problem is that the sections are rolling and impaired.
Thanks in advance
There is a syntax error in the Math section of "sdevice" in TCAD, and as a result the program cannot be executed.
MATH {
Extrapolate
Derivatives
RelErrControl
Digits=5
NotDamped=100
Iterations=25
Transient= BE
Method= ILS
}
The error seems to have started occurring suddenly, at the "Extrapolate" line. I made some structural changes to the device, but this did not affect other settings. Based on my research, I have considered adding "Quasis" to the solved section, but there seems to be an inconsistency compared to the previous behavior.
Where could the issue be coming from?
I saw the effects of sleepiness in the meditation group in the results section. The conclusion drawn by the authors in the abstract is: 【follow-up analyses showed that sleepiness uniquely moderated the effect of meditation on the LPP, such that less sleepiness during meditation, but not the control audio, corresponded to smaller LPPs to negative images】. Does this mean that sleepiness should be a moderator? However, in the presentation of the results, it seems that only the interaction of sleepiness, group, and valence was reported. How should this be understood? I always get confused between interaction effects and moderation.
The result part: 【To parse the three-way Valence X Group X Sleepiness interaction, Time was collapsed across Valence and the data was split by Group. To test whether the interaction between Sleepiness and Valence differed as a function of Group, follow-up correlational analyses involving Fisher r-to-z tests were conducted to compare the relationship between sleepiness and the LPP at each valence across groups. Analyses revealed a significant positive correlation between sleepiness ratings and the LPP on negative (r = 0.25, p = 0.014, 95% CI [0.05, 0.45]) but not neutral trials (r = 0.11, p = 0.290, 95% CI [-0.09, 0.31]) in the meditation group. In contrast, sleepiness ratings were unrelated to the LPP on neither neutral (r = 0.07, p = 0.473, 95% CI [-0.13, 0.27]) nor negative trials (r = -0.10, p = 0.313, 95% CI [-0.30, 0.10]) in the control group. Critically, the Fisher r-to-z tests yielded a significant group difference in only the correlation between sleepiness and the negative LPP (z =|2.45|, p = 0.014), but not the neutral LPP (z =|.24|, p = 0.81). Put more simply, less sleepiness during the guided meditation was uniquely related to the predicted smaller LPP on negative trials; see Fig. 4 for waveforms separated by group and sleepiness.】The article is: https://www.nature.com/articles/s41598-020-71122-7
Many thanks in advance!
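On the interaction-vs-moderation point: in a regression framework, moderation is tested precisely through the product (interaction) term, so a significant Group × Sleepiness × Valence interaction is the moderation evidence. A tiny simulation sketch (all variable names and coefficients are hypothetical) showing that the coefficient on the x·m product term recovers the moderating effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)  # e.g. a valence contrast (predictor)
m = rng.normal(size=n)  # e.g. sleepiness (the moderator)

# The slope of y on x depends on m: dy/dx = 2 + 4*m, so b3 = 4 IS the moderation
y = 1.0 + 2.0 * x + 3.0 * m + 4.0 * x * m

# OLS with an explicit interaction column
X = np.column_stack([np.ones(n), x, m, x * m])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[3] estimates the interaction term, i.e. the moderation effect
```

With noise added, coef[3] would carry a standard error and p-value; a significant interaction coefficient is exactly what papers report as "M moderates the effect of X on Y", which is why the two terms are used interchangeably.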
I have 50 micron thick sections of a rat brain which I am trying to stain with primary antibody AT8 (for p-tau). I stained at 1:500 dilution overnight followed by a 3-hour incubation with secondary antibody. Upon imaging I saw that the AT8 had stained only the top and bottom surfaces of the tissue section. I then incubated another section with 1:500 AT8 for 2 nights on a shaker, hoping it will improve tissue penetration of the antibody. But that did not help either. Note that these sections had not undergone antigen retrieval. Does anyone have any experience with IHC staining of such thick sections?
Hello, I'm trying to model my pipe elbows using shell elements in SAP2000 in order to capture the ovalization effect. I drew the pipes using frame elements with a pipe section, so it's really hard to assign an area section (shell section) to the frame element. Does anybody here know how I can achieve such a task?
Thanks in advance.
#sap2000 #piping
I am currently performing immunohistochemistry on mouse brain sections with GFAP (goat) and Iba1 (rabbit) primary antibodies to visualize astrocytes and microglia.
Today was day two of my IHC. Here are the steps I took in case any of it is relevant:
1) I took my sections out of primary antibody, washed them in 1x KPBS 5 times for 5min each.
2) Placed sections in diluted secondary antibodies (anti-goat Cy5, anti-rabbit 488, diluted 1:500 in 2% normal donkey serum) for 2 hours.
Here is where I was distracted directing multiple people at once... The following were the next steps I SHOULD have taken.
3) Take out of secondary antibody, wash again in 1x KPBS.
4) Add DAPI
5) Briefly wash in 1xKPBS
6) Mount onto slides, coverslip
I took my sections out of secondary antibody and immediately mounted them onto slides. The sections are currently in a dry, dark drawer, dried onto slides. I did not yet coverslip.
Any chance I could rehydrate them with 1xKPBS, slide them off the slides, then wash with 1x KPBS and subsequently add DAPI, like normal?
Any other suggestions, ideas, tips are much appreciated!
As part of the characterization of plant fibers, considering the non-uniformity of the cross-section along the fiber, what should be used to calculate the breaking stress: the mean section resulting from several measurements, the weakest section, or the variation of the deformation energy?
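Assuming circular cross-sections, the first two conventions differ only in which diameter enters A = πd²/4; a quick sketch comparing them (the function name and all numbers are hypothetical):

```python
import math

def breaking_stress(force_N, diameters_m, rule="mean"):
    """Breaking stress F/A for a fiber with non-uniform circular cross-section.
    rule='mean' uses the mean of the measured diameters;
    rule='min' uses the smallest one (weakest-section assumption)."""
    d = (sum(diameters_m) / len(diameters_m)) if rule == "mean" else min(diameters_m)
    area = math.pi * d**2 / 4.0
    return force_N / area

measured = [20e-6, 25e-6, 30e-6]  # diameters measured along the fiber, m
s_mean = breaking_stress(0.10, measured, "mean")
s_min = breaking_stress(0.10, measured, "min")
# The weakest-section assumption always yields the higher stress estimate,
# since failure is assumed to localize at the smallest cross-section.
```

Which convention is appropriate depends on whether failure is observed to occur at the thinnest point; the energy-based approach is a separate route that this sketch does not cover.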
I wonder whether an experiment was ever done that investigated whether the neutron-capture cross-section of a nucleus depends on the spin orientation of the neutrons. Can anyone name a reference? I am not a nuclear physics expert, and I find it hard to find publications about this.
Is it even possible to separate neutrons with different spin directions, as in the hydrogen maser?
Thanks, Daniel
I read this article a few days ago when I started my fitting and was partly confused, because the article says that Chi-square and Chi-square/|Z|^2 are both parameters that indicate goodness of fit.
In addition, at the start of the article, it is mentioned that the lower the Chi-square, the better the fit.
However, in the latter parts of the article, specifically the complex-equivalent-circuit section, they compare the Chi-square/|Z|^2 between Figures 7 and 8 to judge the "goodness" of the fit, and not the Chi-square (which, as observed from Figures 7 & 8, is quite high).
This is the dilemma that I am in.
Which parameter is a better indicator of the fit, Chi-square/|Z|^2 or Chi-square, and why? Are there specific conditions under which one parameter is a better indicator than the other?
Kindly share your thoughts.
Currently my EIS fitting is yielding high Chi-square values (on the order of 1e9–1e12); however, the Chi-square/|Z|^2 is around 0.01 (similar to what is indicated in Figures 7 & 8 of the article).
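The difference between the two statistics can be seen numerically: the unweighted Chi-square scales with |Z|^2, so a high-impedance spectrum gives a huge absolute value even for a small relative misfit, while the modulus-weighted version is dimensionless. A minimal sketch with hypothetical impedance values (not from the article; function names are illustrative):

```python
def chi_square(z_meas, z_fit):
    """Unweighted chi-square: sum of squared residual moduli.
    Scales with |Z|^2, so high-impedance (e.g. coated) systems give
    large absolute values even for a good fit."""
    return sum(abs(zm - zf) ** 2 for zm, zf in zip(z_meas, z_fit))

def chi_square_normalized(z_meas, z_fit):
    """Modulus-weighted chi-square: each residual is divided by |Z|^2,
    making the statistic dimensionless and comparable across spectra
    of very different impedance magnitudes."""
    return sum(abs(zm - zf) ** 2 / abs(zm) ** 2 for zm, zf in zip(z_meas, z_fit))

# Hypothetical high-impedance spectrum (ohms) with a ~1% relative misfit.
z_meas = [1e6 + 2e5j, 5e5 + 1e5j, 1e5 + 5e4j]
z_fit = [zm * 1.01 for zm in z_meas]
print(chi_square(z_meas, z_fit))             # huge absolute number (~1.3e8)
print(chi_square_normalized(z_meas, z_fit))  # small, ~3e-4
```

So a 1% relative misfit produces a Chi-square near 1e8 but a normalized value near 3e-4, which is why the weighted statistic is usually the more meaningful one for comparing fits of high-impedance systems.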
#EIS #Corrosion #Coatings #EIS fitting #BioLogic #EC-lab
Example opening statement:
Parent education to understand and navigate ASD has been noted as integrally important for supporting individuals with ASD.
Example closing statement:
In recognising and responding to the need for ASD capacity-building at the micro level, the next section will discuss meso-, exo- and macro-level responses to foster change in the context of Eswatini.
Hello. I am using EViews 12 and I am running the CADF unit root test because my variables suffer from cross-section dependency. I tested each variable, but the result was reported for each cross section. How should I interpret the result for such a variable, and how should I write it up in a paper if I have a large number of cross sections? Thank you.
If I have sociodemographic factors that might have an impact on the study, even if they aren't explicitly mentioned in the stated research objectives, should I avoid discussing them in the discussion section?
TLDR - I've been working on creating frozen sections of bone. Norland Optical Adhesive 63 (a UV curing agent) always turns white during the immunofluorescence protocol, and my sections become practically unusable. Any suggestions for preventing this?
The process is as follows:
fixation in 4% PFA for 48 hours, cryoprotection in 30% sucrose overnight, then embedding in OCT and sectioning the block. We do not decalcify our bones for frozen sections. As such, we have to use the tape transfer system in order for sections to stay on the slide. For sectioning, I have been putting a very thin layer of Norland Optical Adhesive 63 on the slide, transferring the section of bone (15 µm) to the slide via tape and rolling it out, flashing the slide 2x with UV and letting it dry at RT for ~5 minutes before taking the tape off. I store the slides at -20°C. I want to stain these sections using immunofluorescence, but every time I incubate the sections in primary antibody or leave them in a humid/moist environment, everywhere the glue was turns white. I cannot see my sections well and my staining looks terrible because of it.
Has anybody had any experience with this problem and knows how to prevent the glue from turning white? I've attached a picture as an example.
Assuming I have cited some previous research in my background, will it be acceptable for me to repeat the same citations under the literature review section?
I have some papers in "in-between" journals which are not peer-reviewed, but are more than strictly "popular." I have been putting them under a "semi-academic" header on my C/V, but I was wondering if there was a better way of putting this. Ideas that came to mind are "Scholarly, editorial review" and "Professional Publications." I am a PhD theology student, so it is relatively normal to also write to a wider audience than academics reading peer-reviewed journals.
Thank you!
Hey everyone, has anyone ever performed immunohistochemical staining followed by histochemical staining on the same histological section (like a double labeling)?