Science topic
Running - Science topic
An activity in which the body is propelled by moving the legs rapidly. Running is performed at a moderate to rapid pace and should be differentiated from JOGGING, which is performed at a much slower pace.
Questions related to Running
I understand that it is possible to estimate the P-fixation capacity of a given soil using the Freundlich and Langmuir models, and I am sure that running these models is quite an accurate approach to determining P fixation.
I decided to ask this question because I feel that, for farmers, this is a very complex approach. I was therefore wondering whether there is a different process to follow that would be more practice-oriented?
I'm trying to install packages for my master's thesis. When I try to install the package 'foreign' it works, but when I try to run it, I receive the message:
Error in .helpForCall(topicExpr, parent.frame()) :
no methods for ‘foreign’ and no documentation for it as a function
How do I fix this? I have R version 4.4.0 on macOS 11 or 12, I believe.
Hi everyone,
I have three objectives in the protocol.
My first objective is to calculate the incidence of specific illness presentations to the emergency departments.
Just to be sure: if I have inclusion and exclusion criteria in the study, should I complete the first objective before applying the inclusion and exclusion criteria?
Because to calculate the incidence accurately, it's essential to include every individual diagnosed during the study period.
Thanks
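For reference, the incidence calculation itself is simple once the denominator is fixed; here is a minimal sketch in Python with entirely hypothetical numbers (120 presentations among 40,000 people at risk):

```python
def incidence_proportion(new_cases, population_at_risk):
    """Cumulative incidence: new cases during the study period
    divided by the population at risk at the start of the period."""
    if population_at_risk <= 0:
        raise ValueError("population at risk must be positive")
    return new_cases / population_at_risk

# Entirely hypothetical numbers: 120 presentations, 40,000 people at risk.
risk = incidence_proportion(120, 40_000)
```

The real difficulty, as the question notes, is fixing the numerator before exclusions remove diagnosed cases, not the arithmetic.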
I am performing western blots, and recently I have been obtaining faint bands for samples I had already run and that had previously given darker bands. I wish to determine the protein concentration in those samples, but they are now gel-ready (loaded in Laemmli buffer). Can anyone please suggest a way?
Hello,
This is my VERY first time attempting a NEB calculation, and I must admit, I'm feeling quite confused. I would sincerely appreciate any feedback or suggestions. Here's the issue:
Following the instructions provided, I have the initial and final optimized geometries, represented by POSCAR1 and POSCAR2. Then I used the nebmake.pl script to generate directories corresponding to the number of images.
In my case, I used 3 images, resulting in the creation of 5 subdirectories numbered 00, 01, 02, 03, and 04. Each subdirectory contains the respective POSCAR file.
Currently, my parent directory looks like this:
directory_00
directory_01
directory_02
directory_03
directory_04
INCAR
KPOINTS
POSCAR1
POSCAR2
POTCAR
I also placed OUTCAR files in the initial and final structures' subdirectories, i.e., directory_00 and directory_04.
Now, ideally, I should proceed with running my NEB calculation, correct? However, after submitting the job to VASP, I kept encountering an error message stating "POSCAR: No such file or directory."
Then, I attempted to run VASP individually for each subdirectory. For instance, let's consider directory_01. For this step, I have all four input files in this directory. However, after running the VASP calculation, I encountered another error: "forrtl: No such file or directory."
Additionally, while I was going through the instructions for installing VTST codes, I noticed specific guidelines for compiling VTST code into VASP. Currently, I'm using VASP version 6.4.2.
The instructions involved downloading the vtstcode-199. They included the following steps: "To build the code, the VASP .objects and makefile need to be changed. Find the variable SOURCE in the .objects file (a hidden file in src/), which defines which objects will be built, and add the following objects before chain.o."
I cannot locate the .objects file anywhere!
This doesn't make sense to me at all. Have I misunderstood the process, or is there something else going wrong?
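One thing worth checking first: in the standard VTST workflow, the NEB image subdirectories are expected to be named literally `00`, `01`, ..., `04` (not `directory_00`), each containing a `POSCAR`, with the endpoint `OUTCAR`s in `00` and `04`; a mismatch there commonly produces "POSCAR: No such file or directory." A small Python sketch of that layout check (the directory names and file sets below are assumptions modeled on the question, not output from any real run):

```python
def check_neb_layout(tree, n_images):
    """Validate a VTST/VASP-style NEB directory layout.

    tree: mapping of subdirectory name -> set of file names in it.
    n_images: number of intermediate images (endpoints excluded).
    Returns a list of human-readable problems (empty list = looks OK).
    """
    problems = []
    dirs = [f"{i:02d}" for i in range(n_images + 2)]  # "00".."04" for 3 images
    for d in dirs:
        if "POSCAR" not in tree.get(d, set()):
            problems.append(f"{d}: missing POSCAR")
    for d in (dirs[0], dirs[-1]):  # endpoint energies come from these OUTCARs
        if "OUTCAR" not in tree.get(d, set()):
            problems.append(f"{d}: missing OUTCAR")
    return problems

# A layout matching the question: 3 images -> subdirectories 00..04.
good = {
    "00": {"POSCAR", "OUTCAR"},
    "01": {"POSCAR"},
    "02": {"POSCAR"},
    "03": {"POSCAR"},
    "04": {"POSCAR", "OUTCAR"},
}
```

If the real directories carry a `directory_` prefix, renaming them to plain `00`..`04` is the first thing to try.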
My nanopore sequencing run has generated some "unclassified reads". Can anyone explain what causes them to be unclassified and how to avoid them in the future?
I've been trying to find a way to automate identifying the gas exchange threshold when plotting VCO2 over VO2. Most seem to do it visually, but a 2019 paper (PMID: 31699973) used MATLAB code to identify the inflection point (lsqcurvefit) and run linear regressions above and below the inflection to identify a similar threshold while plotting VO2 over watts. Any suggestions/input would be helpful.
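As a starting point, the two-segment idea can be approximated without MATLAB: brute-force the breakpoint of a broken-stick fit (two least-squares lines) and take the VO2 at the segment boundary. A sketch in Python with synthetic data; the slopes and the 2.0 L/min kink are invented for illustration, not taken from the paper:

```python
def linfit_sse(xs, ys):
    """Least-squares line through (xs, ys); return the sum of squared errors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return sum((my + slope * (x - mx) - y) ** 2 for x, y in zip(xs, ys))

def find_breakpoint(x, y):
    """Brute-force broken-stick fit: index minimizing total SSE of two lines."""
    best_i, best_sse = None, float("inf")
    for i in range(3, len(x) - 3):           # keep >= 3 points per segment
        sse = linfit_sse(x[:i], y[:i]) + linfit_sse(x[i:], y[i:])
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i

# Synthetic VCO2-vs-VO2 data with a made-up slope change at VO2 = 2.0 L/min.
vo2 = [1.0 + 0.05 * k for k in range(41)]
vco2 = [0.9 * v if v < 2.0 else 0.9 * 2.0 + 1.3 * (v - 2.0) for v in vo2]
threshold = vo2[find_breakpoint(vo2, vco2)]
```

On real breath-by-breath data you would smooth first, and scipy's `lsqcurvefit` equivalent (`curve_fit` with a piecewise model) would fit the breakpoint continuously instead of on the sample grid.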
I am running MaxQuant; it starts running and almost immediately stops. I go into the error folder (combined --> proc, then select Configuring 11.error). The screenshot of the error is attached. I had converted .d files from an Agilent system to .mzML, as I couldn't get MaxQuant to recognize the .d files. I also went into global parameters --> advanced and unchecked the use of .NET Core, as .NET Core was throwing an error and I found that doing so helped others in the same boat. The data files and FASTA files are all in the same location.
Hello, I use ANSYS explicit dynamics to simulate something and after running the simulation, I get this error message. How can we solve it?
Thank you!
I am not sure these phones are commercially available yet. 6G is only available on the 15 Pro. I have no experience with MATLAB. My current service provider has the YouTube app, which lets me see these types of plots as videos from podcasters (typically academics in math departments). An example would be a transcendental equation involving one variable.
A 2020 solution to the interior tethered-goat problem is the layman's description. The goat's "range" is symmetric and lobed. Similar functions abound in nature. In some instances 3D solutions are easier. To not consume RAM, I'd target 6 or so radians, i.e., (2 x 3.141)/6.
I would like to optimize my extraction using response surface methodology, but I'm perplexed about which design I should use. I have just 2 factors with 2 levels each, so I considered using a CCD, but I don't understand the basics of the run/test results, which include some replication, or the lack-of-fit test (is this crucial?). If I change it to no replication, or minimize the lack of fit, is that OK? Or should I consider another design?
Thank you very much for your kind response.
I am running DNA PAGE after PCR (samples 6-15 are run in duplicate, with the second sample digested) to determine serotonin genotypes. The ladder (well 1) is on the far right of the attached image. I would greatly appreciate any advice on how to enhance band brightness and definition, thanks.
Additional information: 5 µL of ladder added, 10 µL of PCR product per well, pH of the buffer is correct. Temperature of the room ~75 °F, with the gel container NOT on ice.
Hi Friends,
In some Fluent simulation problems, after starting the run, the iterations proceed up to a certain point (in my case the 88th iteration) and then stop, but the program keeps loading. Why does this happen? Even if I press stop, it stays the same and I am not able to close it; I can only close it via Ctrl+Alt+Del. Has anybody faced this problem?
I am trying to transfer a method from a column that is 250 x 4.6 mm, 8 µm particle size, to a column that is 300 x 7.8 mm, 9 µm particle size. The makeup of the columns is pretty much the same, and each is run as an isocratic method. How much more should I load, and how should I adjust other parameters to compensate?
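A common rule of thumb when transferring an isocratic method: keep linear velocity constant by scaling flow with the square of the column diameter, and scale injection volume with the column volume (L x d^2). A sketch using the two geometries from the question; the 1.0 mL/min flow and 20 µL injection are placeholder starting values, not from the original method:

```python
def scale_flow(flow, d_old, d_new):
    """Keep linear velocity constant: flow scales with column cross-section."""
    return flow * (d_new / d_old) ** 2

def scale_injection(vol, L_old, d_old, L_new, d_new):
    """Keep load per column volume constant: volume scales with L * d^2."""
    return vol * (L_new * d_new ** 2) / (L_old * d_old ** 2)

# Question's geometries; 1.0 mL/min and 20 uL are assumed starting values.
new_flow = scale_flow(1.0, 4.6, 7.8)                        # ~2.9x the flow
new_injection = scale_injection(20.0, 250, 4.6, 300, 7.8)   # ~3.5x the load
```

The different particle size (8 vs. 9 µm) mainly affects backpressure and efficiency, so retention times should still be checked against the original method after scaling.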
Is there anyone who has a tutorial video on MetaboAnalyst 6.0, or who can assist me in running it on my LC-MS/GC-MS data for untargeted metabolomics studies on soil and plants? We can collaborate on publishing together.
I am developing a competitive lateral flow device for the determination of antibiotics in milk. I am struggling with uneven staining of the test line. The test I developed has 4 test lines and I have a problem with only the third test line. I tried to change the concentration, pH, detergents, alcohols, and sugars in the test-line dispensing buffer and no improvement was achieved. The other test lines and control lines don't have such a problem. The protein I dispense on the third test line is BSA-chloramphenicol conjugate.
Could you advise on the problem?
As a sample, I use milk that doesn't contain any antibiotics.
I use liquid gold conjugate for testing.
I attached the photo of the same sample run in multiplicate (23 total runs) at the same time.
I am running simulations with oxDNA using advanced sampling techniques such as umbrella sampling.
I am running an SDS-PAGE western blot using 10% acrylamide gels. However, my samples are not migrating beyond 55 kDa, and the bands are not defined. I am using 4x Laemmli buffer with LDS from Bio-Rad. The cell lysates are human whole-brain lysates. I am wondering if the LDS has something to do with this? I tried boiling the samples at 95 degrees for 5 min and heating at 70 degrees for 10 min; neither worked.
Hi everyone,
I hope you are all well.
I am currently running the VDA 278 standard on an ATD-GC-MS, but I get the error message "Extended Trap Des. Equilibrium" on my ATD panel. When the error occurs, there are no peaks in the chromatogram (just like a blank test). After checking, the problem seems to be that the column flow rate becomes unstable before trap heating.
The video I took of the unstable column flow: https://www.dropbox.com/s/mj5efmq3rboaki0/2023-4-17%2005%2028.mov?dl=0
Currently, I run n-alkane analyses with Tenax TA sorbent tubes (methanol as the solvent; 1-2 µL of liquid solution spiked directly onto the Tenax TA tube). The trap is packed with Tenax TA as well. The trickiest part is that this error always occurs around two weeks after the PE engineer's maintenance. (While the engineer is here, the machine runs well; however, after running some tubes, the error occurs unpredictably.) No leak is detected in the system, and the air/water check looks really good.
So I'm wondering whether anyone has had the same issue before and is willing to share their experience with the extended trap desorption equilibrium.
These are the parameters I'm running now:
350ATD
Temperature(C)
Tube: 280
Transfer Line: 280
Valve: 280
Trap Low: -30
Trap High: 280
Trap Rate (C/s): 99
Times (min)
Tube Desorb: 20
Trap Hold: 20
Trap Desorb (Desorb2): 1
Purge: 1
GC Cycle: 85
Pneumatics (mL/m)
Inlet Split: 44
Outlet Split: 19
Tube Desorb: 40
Column: 2
Col/Trap Desorb: 2
GC Column: HP-ULTRA 2 50m, 0.32mm(Diam), 0.52 µm(Film)
GC Temperature: 40C for 2min, 3C/min to 92C, 5C/min to 160C, 10C/min to 280C holding 10min
I am currently using the gmx_MMPBSA tool for MM/PBSA analysis. I have a 100 ns GROMACS trajectory for a protein-ligand complex, which contains 10,000 frames. Even if I run the MM/PBSA analysis with an interval of 5, it takes one day to finish analyzing the complex contribution alone. Please suggest a solution so I can get results in half a day.
Also, can I take the last 10 ns of the trajectory and apply an interval of 5 for the analysis? Will it give a good result in a shorter time?
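For planning, it helps to count the frames each setting would actually analyze: with 100 ns over 10,000 frames (10 ps/frame), the whole trajectory at an interval of 5 is 2,000 frames, while the last 10 ns at the same interval is only 200, roughly a tenfold speedup. A quick sketch of that arithmetic:

```python
def frames_analyzed(start_frame, end_frame, interval):
    """Frames actually processed when slicing [start, end] with a stride."""
    return len(range(start_frame, end_frame + 1, interval))

# 100 ns / 10,000 frames = 10 ps per frame, so the last 10 ns
# corresponds to frames 9001..10000.
full_run = frames_analyzed(1, 10000, 5)      # whole trajectory, interval 5
last_10ns = frames_analyzed(9001, 10000, 5)  # last 10 ns, interval 5
```

Whether 200 frames from the final 10 ns is representative depends on the complex having equilibrated by then, so it is worth checking RMSD convergence before trusting the shortened window.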
Hello:
I work in Abaqus 2017 a lot. I have noticed that when I submit any abaqus job (that does not contain any subroutine!) via parallel processing (aka cpus=4 or 8), it runs just fine!
But when I submit an Abaqus job that contains a user subroutine via parallel processing, it won't run at all. It will not abort, or terminate, or do anything. The job will freeze.
The only way to run an abaqus job that has a user subroutine with it, is to run the job with single processing. This takes up a lot of time.
I do use the allocatable arrays for the job, and define them in my subroutine. Is there something I should include in my subroutine?
Could you please tell me why Abaqus jobs (with user subroutines) get completely stuck with cpus of 2 or higher?
Thank you very much!!!
I'm using covariance-based SEM software. Do I need to normalize the data before I run the SEM model? The SEM model is fine and the sample is large (n = 744).
1. Do I need to normalize the data before I run SEM?
2. Would the central limit theorem apply, so that I need not normalize the data?
3. Normalizing would change the basic characteristics of the data. Would the findings still be valid?
After staining and solidifying my agarose gel, I load the first well of the dried agarose gel with the TrackIt Ladder (10488058) from Invitrogen and load my unstained DNA samples into the other wells. I fill the electrophoresis device on top of the agarose gel without covering it and run the electrophoresis at 35 V (5 min) and 50 V (5 min). Then, I cover the gel with TBE 7 mm above and run the electrophoresis at 65 V (60 min).
When I take the photo with Trans UV BioRad, the ladder bands are distinct and stain well, but the DNA samples are not.
I need help with this, I have tried it with other Invitrogen ladders and without them with the same procedure and the DNA bands are visible, except on this TrackIt Invitrogen ladder.
Flash Flooding in UAE: Could have been avoided?
The risk of flash flooding in the UAE is very well known.
Changes in land use arising from enhanced urbanization over the last few decades mean that the hydrological consequences associated with flood risk need to be faced, particularly shorter times to peak discharge.
Still, under normal circumstances, 147 mm in a day is not at all a big spell to be concerned about. However, the UAE has an altogether peculiar hydrological system.
1. UAE does not have rivers flowing downhill.
2. The UAE's flood terrain is characterized not by conventional water-formed features but by wind-formed ones (which are no longer stable under the flow of water).
Flood water has to flow along the base of the dune formations, as dictated by wind direction. And these dunes in the UAE vary over orders of magnitude (some running for a few tens of meters, while others run for tens of kilometers).
3. Highly variable infiltration rates across the entire catchment.
4. Establishing connectivity between overflowing upstream ponds and subsurface flow paths remains a basic problem.
5. Scouring and erosion dictating the variations in the course of surface water flows.
6. Absence of natural river mouths discharging floodwaters into the Sea.
I have used a DLS instrument (Zetasizer software) to measure the size of the prepared micelles. For the first 2 runs the quality report shows "Good", while the 3rd measurement shows "Too Polydisperse" for the same sample; the Z-avg values (26.48, 26.64, 27.93) and PDI values (0.558, 0.563, 0.502) are as stated. (The 3 runs are done automatically by the machine.)
I am studying the impact of leadership style on job satisfaction. In the data collection instrument, there are 13 questions on leadership style, divided among a couple of leadership styles. On the other hand, there are only four questions for job satisfaction. How do I run correlational tests on these variables? What values do I select to analyze in Excel?
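If Excel is not mandatory, one standard approach is to average the items into a composite score per construct (one per leadership style, one for satisfaction) and correlate the composites. A minimal Pearson correlation sketch in Python; the six respondents' scores here are made up purely for illustration:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical composite scores: mean of the style items for one style
# vs. mean of the 4 job-satisfaction items, per respondent.
style = [3.2, 4.1, 2.8, 3.9, 4.5, 2.5]
satisfaction = [3.0, 4.3, 2.6, 3.7, 4.6, 2.9]
r = pearson_r(style, satisfaction)
```

In Excel the same thing is `=CORREL(range1, range2)` on the two composite-score columns; either way, the values you analyze are the per-respondent means of each construct's items, not the raw individual questions.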
I'm running MD calculations in VASP for pi-stacked dimer (triphenylene) and I constrained the internal coordinates of individual monomers using ICONST file.
The calculations always stop after one ionic step due to
Error: SHAKE algorithm did not converge!
Error too large, I have to terminate this calculation!
Can anyone give me guidance on which parameters I can change to resolve this error?
The error file is attached for your reference.
Thank you in advance!
Hi,
When there is only a single analysis step in my job, multi-core processing can be used to run it successfully.
However, when the job has more than one analysis step, multi-core processing can seldom run it successfully. Only the first step finishes; after that, the subsequent step does not proceed, and the status always shows "running".
Does anybody know the reason for this?
Thanks.
Hello all:
Any help appreciated. I am running an intervention with students in an ESL environment. I want to compare knowledge and awareness of the topics pre- and post-intervention. The issue is that the survey needs to be anonymous. I have no guarantee that the same students will be taking part in both questionnaires (although there will be significant overlap). I will not be running a placebo group.
Am I looking at independent t-tests, or something different? Can I infer that any change is due to the intervention if it's not a paired t-test?
Again, any help appreciated, if anyone could direct me!
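When the pre and post respondents cannot be linked, treating the two waves as independent samples is the usual fallback; Welch's t-test is a reasonable choice because it assumes neither equal variances nor equal group sizes. A sketch of the test statistic in Python, with entirely hypothetical awareness scores:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal sizes OK)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mb - ma) / (va / len(a) + vb / len(b)) ** 0.5

# Hypothetical awareness scores (0-10 scale), pre vs. post intervention.
pre = [4, 5, 3, 6, 4, 5, 4]
post = [6, 7, 5, 8, 6, 7, 7]
t_stat = welch_t(pre, post)
```

In practice `scipy.stats.ttest_ind(pre, post, equal_var=False)` gives the same statistic plus a p-value. Note the design caveat stands regardless of the test: without a control group, a significant difference shows change over time, not that the intervention caused it.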
I am running an RNA-ligand simulation on GROMACS, and I encountered a problem in which the number of coordinates in the coordinate file (EM.gro: 2171) does not match the topology file (topol.top: 30886). Please help me with this problem.
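One quick check: grompp compares the atom count on the second line of the .gro file against the total implied by the [ molecules ] section of topol.top (commonly off when solvent or ion counts were not updated after solvation/genion). A sketch of reading the .gro side of the comparison; the file content shown is a made-up stub, not the poster's actual file:

```python
def gro_atom_count(gro_text):
    """A .gro file's second line holds the total number of atoms."""
    return int(gro_text.splitlines()[1].split()[0])

# Made-up stub standing in for EM.gro; only the two header lines matter here.
stub = "RNA-ligand system\n 2171\n(...atom lines would follow...)\n"
n_atoms = gro_atom_count(stub)
```

Summing (copies x atoms per molecule) over the [ molecules ] section by hand and comparing it to this number usually points straight at the molecule whose count is wrong.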
On the dftbephy code: how can we fix the problems when running "python bands.py" and "python dftbephy-epc.py"? On my system there are errors at bands.py line 92, dftbephy/calc.py lines 176 and 253, dftbephy/epc.py line 83, dftbephy-epc.py line 138, and python/site-packages/scipy/linalg/decmpy.py line 578, which stop the run. How can I resolve the issue or fix the problems shown in the screenshot file below?
Can anyone assist with running the Linkage Analysis Tool 'Easylinkage' or any alternative tool for conducting linkage analysis and calculating LOD scores?
Opening Islamic banking by conventional banks fact.
Does anyone know what I have done wrong? I get an error message in a PROCESS v4.2 moderation analysis with a categorical moderator: "one of the categories contains only a single case."
The moderation variable is categorical with 4 values. I have run this analysis before and it worked perfectly. Where have I made my mistake, please?
With regard to the article by Kasarla & Pathak (2023) entitled "Tenets of Specimen Management in Diagnostic Microbiology": are there any significant effects of specimen management in microbiology on the overall results of laboratory tests? Why is it important to ensure the quality and state of the specimen before running it in the lab?
I am facing some problems while running a project in TRNSYS.
Previously, I created the project in TRNSYS 17, and it ran fine.
But when I try to run the same project in TRNSYS 18, I get the following warnings:
1) "A duplicate of TYPE508 was found in 'TESS_HVAC_v17.2.01_64.dll'"
2) "Wrong version of gentrn.dll was used. TYPE 56 expects version 255.000, but TRNBuild used gentrn.dll version 257.000."
After I got this error, I thought the problem was with the version (the installed TRNSYS 17 is 32-bit and TRNSYS 18 is 64-bit), so I recreated the project in TRNSYS 18; ultimately, when I try to run the recreated project, I get a similar warning message.
It would be great if I could know what the problem is and how to solve it.
Hello, I am trying to find a website or freeware that predicts amphipathic helices from an AA sequence.
I have been trying AMPHIPASEEK (amphipathic in-plane membrane anchor prediction),
but it does not seem to be running.
Also, any tips on submission (a small sequence of the helix vs. the whole protein, etc.) would be appreciated.
Thanks everybody, Neal
I have a trivariate panel VAR system with the following variables: LnGDP (natural log of GDP), fiscal expenditure (as % of GDP), and interest rate (in %). LnGDP and the interest rate are stationary at levels, but fiscal expenditure is not. The first difference of fiscal expenditure is stationary. Note that while checking the stationarity of fiscal expenditure, when I include a drift, the panel is stationary.
My first question is if I run a Panel VAR model based on the above information, what can be the future consequences?
My second question is provided the above regression is non-spurious, how do we interpret the coefficients in the model?
Thank You
Every so often, someone presents work that is derivative of mine (e.g., Lightspeed Expanding Hyperspherical Universe or LEHU).
That particular hypothesis or postulate is impossible to defend. Richard Feynman proposed the hyperspherical topology in a lecture, but he couldn't defend it because of the problems it creates.
My theory - The Hypergeometrical Universe Theory (HU) solves those problems.
It emboldened many copycats who believed they could jump on the wagon, take part in the HU model (for example, LEHU), and run.
By the way, I should clarify that LEHU is part of HU's model, not because I was the first to propose it, but because I am the only one who can defend it.
That creates flawed models where the flaws are obvious.
Here, I present a critique of one of these models. Feel free to disagree or agree in the comments.
Hello
I am new to CRISPR-Cas9 and, for my project, I started to collaborate with another lab that claims expertise in this technique. The objective is to produce 3 cell lines, each with a knockout for a different gene. They did the whole process, from cloning the plasmid to viral transfection, where a point mutation was induced and an antibiotic resistance gene was added and then selected for.
From what I understand, all cells they gave me are resistant to antibiotic but not all cells necessarily have the mutation or the same mutation, and the solution for that, as far as I know, would be single cloning. However, they said single cloning would not be necessary and that all I had to do was run a western blot for the targets in this heterogeneous cell pool to be sure the knockouts worked. They said if the band was weaker or had a different size it would prove it worked. They also said I could already run phenotypic experiments and it would show it worked as well, but, according to them, the definitive proof is the western blot.
Because I am completely new to this, I just wanted to know if such decisions make sense. I find it a bit weird because I would have to do single-cell cloning anyway in the end. It wouldn't make sense to me to try to check the phenotype of a heterogeneous cell pool if I don't even know the mutation rate of the cells, and I assume such rates might even change every time I plate those cells. Moreover, I think there might be the possibility that the protein could maintain the same size even if it became dysfunctional after the knockout.
I found those decisions weird, but I don't have enough experience with the technique to have a solid opinion. Any thoughts?
Hello everyone,
I am trying to run Rietveld refinement on my sample, which has two phases. The primary phase is quite dominant, with most of the peaks, but there is only one peak of the secondary phase. I read somewhere that Rietveld refinement is not possible, or will not yield good results, if I run the refinement on a pattern with few peaks. Can anyone please clarify whether I can run the refinement on a pattern with just one secondary peak?
My scan is from 10 to 70 degrees with a step size of 0.0168.
What is a super vacuum? Is the earth in a vacuum? And what is dark energy?
This has not been proven to this day, and nature has applied and proven exceptions to, and violations of, the accepted theories many times in the past, showing that these were merely human formalisms and experimental artifacts exploiting the limits of technology, and that physical limits and laws are constantly being broken and bent in nature. Here we will attempt to show, theoretically, why and how there is experimental evidence in our universe of vacuum space, either in its theoretically idealized absolute form (free space) or in the partial vacuum that characterizes the vacuum of QED or QCD. Its zero-point energy and oscillations may actually be the greatest proof in nature of super energy.
It is possible, without violating causation, that the apparent "nothingness" of vacuum space may be evidence of superluminosity, and all this time it was hidden right in front of us. We are trying to answer a fundamental question of physics: why the vacuum of space looks like nothing to us, on the assumption that "nothing" exists in nature, and why a hypothetical superluminous vibration, a particle of Planck size, creates apparent nothingness in our spacetime. The novelty of the research here is the inference that free space is dark energy, and that this is superluminous energy.
Stam Nicolis added a reply:
(1) Depends what is meant by ``super vacuum''. The words must, first, be defined, before questions can be asked. As it stands, it doesn't mean anything.
(2) To a good approximation the earth is moving around the Sun in a vacuum, i.e. its motion can be described by Newtonian mechanics, where the only bodies are the Earth and the Sun and the force between them is Newton's force of gravitation.
(3) Dark energy is the property of space and time that describes the fact that the Universe isn't, simply, expanding, but that this expansion is accelerating. To detect its effects it's necessary to measure the motion of bodies outside our galaxy.
To understand all this, it's necessary to study classical mechanics (that leads to understanding the answer to the second question) and general relativity (in order to understand the answer to the third).
László Attila Horváth added a reply:
Dear Abbas Kashani ,
The graviton, which creates or captures elementary X-rays and gamma rays, can by itself be considered almost like a super vacuum.
Sergey Shevchenko added a reply:
What are rather numerous, and really strange, “vacuums” in mainstream physics, and what are two real vacuums is explained in the Shevchenko-Tokarevsky’s Planck scale informational physical model , 3 main papers are
https://www.researchgate.net/publication/367397025_The_Informational_Physical_Model_and_Fundamental_Problems_in_Physics (section 6, "Mediation of the fundamental forces in complex systems")
The first vacuum is the Matter’s fundamentally absolute, fundamentally flat, fundamentally continuous, and fundamentally “Cartesian”, (at least) [4+4+1]4D spacetime with metrics (at least) (cτ,X,Y,Z, g,w,e,s,ct), which is the actualization of the Logos set elements “Space” and “Time” [what are “Logos” set, “Space” and “Time” see first pages in 1-st or 2-nd links] at creation and existence of a concrete informational system “Matter”,
- i.e. this vacuum is a logical possibility for/of Matter’s existence and evolving, and so is by definition nothing else than some fundamentally “empty container” , i.e. is “real/absolute” vacuum.
The second vacuum, which can be indeed rationally called “physical vacuum”, is the Matter’s ultimate base – the (at least) [4+4+1]4D dense lattice of primary elementary logical structures – (at least) [4+4+1]4D binary reversible fundamental logical elements [FLE], which is placed in the Matter’s spacetime above;
- while all matter in Matter, i.e. all particles, fields, stars, galaxies, etc., are only disturbances in the lattice, that were/are created at impacts on some the lattice’s FLE. At that it looks as rather rational scientifically to assume, that such vacuum really existed – that was the initial version of the lattice that was created/formed at the “inflation epoch”, more see the SS&VT initial cosmological model in section “Cosmology” in 2-nd link.
After this initial lattice version was created, in the lattice a huge portion of energy was pumped uniformly globally [and non-uniformly locally], what resulted in Matter’s “matter” creation, which we observe now.
Since all disturbances always and constantly move in the lattice with 4D speeds of light, now can be only some “local physical vacuums”, etc.;
- though that is really quite inessential: the notion "physical vacuum" is completely useless and even wrong, since the really scientifically defined FLE lattice is completely enough for the description and analysis of everything that exists and happens in Matter. The "vacuums" introduced in mainstream physics are really nothing else than transcendent/mystic/fantastic mental constructions, which exist in mainstream physics because in the mainstream all fundamental phenomena/notions, including "Matter", "Space/space", "Time/time", are fundamentally transcendent/uncertain/irrational,
- while these, and not only, really fundamental phenomena/notions can be, and are, really rigorously scientifically defined only in framework of the SS&VT philosophical 2007 “The Information as Absolute” conception, recent version of the basic paper see
- on which the SS&VT physical model is based.
More see the links above, a couple of SS posts in
https://www.researchgate.net/post/What_is_the_concept_of_quantized_vacuum_And_what_is_the_role_of_gravity_in_nature_And_what_is_the_relationship_between_dark_energy_and_quantum_gravi are relevant in this case also.
Abderrahman el Boukili added a reply:
Super vacuum, in my view, is just the vacuum itself, that is, the channel through which the universe of particles and anti-particles intersects.
Courtney Seligman added a reply:
For all practical purposes, the Earth is moving through a vacuum as it orbits the Sun, as there is so little of anything in any given place that only the most sensitive instruments could tell that there was anything there. But there are microscopic pieces of stuff that used to be inside asteroids or comets, and pieces of atoms blown out of the Sun as the Solar Wind, and cosmic rays that manage to get through the Sun's "heliosphere" and run into anything that happens to be in their way. So though the essentially empty space around the Earth would qualify as a vacuum by any historical standard, it isn't an absolutely perfect vacuum. And I suppose a "super vacuum" would be a region where there isn't anything at all, including not only matter, but also any form of energy (which has a mass equivalence of sorts, per Einstein's Special Theory of Relativity); and if so, then "super vacuums" do not exist.
Harri Shore added a reply:
The concepts you're exploring—super vacuum, dark energy, and the nature of the vacuum in quantum electrodynamics (QED) and quantum chromodynamics (QCD)—touch on some of the most profound and speculative areas in modern physics. Let's break down these concepts to provide clarity and context for your inquiry.
Super Vacuum
The term "super vacuum" is not widely used in mainstream physics literature but could be interpreted to mean an idealized vacuum state that is more "empty" than what is typically considered achievable, even beyond the vacuum state described by quantum field theories. In standard quantum field theories, a vacuum is not truly empty but seethes with virtual particles and fluctuates due to quantum uncertainties, known as zero-point energy.
Is the Earth in a Vacuum?
The Earth is not in a vacuum but is surrounded by its atmosphere, a thin layer of gases that envelops the planet. However, outer space, which begins just beyond the Earth's atmosphere, is often described as a vacuum. This is because outer space contains far fewer particles than the Earth's atmosphere, making it a near-vacuum by comparison. It's important to note that even the vacuum of outer space is not completely empty but contains low densities of particles, electromagnetic fields, and cosmic radiation.
Dark Energy
Dark energy is a hypothetical form of energy that permeates all of space and tends to accelerate the expansion of the universe. It is one of the greatest mysteries in modern cosmology, making up about 68% of the universe's total energy content according to current observations. The exact nature of dark energy is still unknown, but it is thought to be responsible for the observed acceleration in the expansion rate of the universe since its discovery in the late 1990s through observations of distant supernovae.
Vacuum Energy and Superluminosity
Vacuum energy refers to the energy that exists in space due to fluctuations of the quantum fields, even in the absence of any particles or radiation. It is a manifestation of the Heisenberg uncertainty principle in quantum mechanics, which allows for the temporary creation of particle-antiparticle pairs from "nothing."
The concept of superluminosity or superluminal phenomena (faster-than-light phenomena) is speculative and not supported by current mainstream physics, as it would violate the principle of causality, a cornerstone of the theory of relativity. However, there have been theoretical explorations of conditions under which apparent superluminal effects could occur without violating causality, such as in the context of quantum tunneling or warp drives in general relativity.
Vacuum Space as Evidence of Superluminous Energy
Your hypothesis suggests that vacuum space or "nothingness" might be evidence of a superluminous energy or vibration at the Planck scale that creates the apparent emptiness of space. This is a speculative notion that would require new theoretical frameworks beyond the standard model of particle physics and general relativity. It also implies that dark energy, the force behind the universe's accelerated expansion, could be related to this superluminous vacuum energy.
While current physical theories and experimental evidence do not support the existence of superluminous phenomena or energies, the history of science shows that our understanding of the universe is constantly evolving. Theoretical proposals that challenge existing paradigms are valuable for pushing the boundaries of our knowledge and prompting new avenues of experimental and theoretical investigation. However, any new theory that proposes mechanisms beyond established physics must be rigorously tested and validated against empirical evidence.
This is my first time dealing with label-free proteomics data. The data were generated from mayfly, which has no annotated proteins. I used "uniprot_sprot.fasta" as the reference database, which has about 500k protein entries. The goal is to identify proteins and do differential analysis downstream. Using the standard LFQ settings in MaxQuant, I found only 85 entries in the proteinGroups.txt file, which is far too few to cover the whole mayfly proteome. When I instead ran the search against a smaller set of (partial) proteins we generated from mayfly ourselves, I got 330 entries in proteinGroups.txt. I expected a higher number of identified protein groups using the Swiss-Prot reference. Any suggestions on what might have gone wrong?
I am running a bacterial culture and reading the bacterial density on a spectrophotometer at 600 nm. I blanked with the culture medium and also ran a tube without inoculation as a control.
After some time, the inoculated sample gives me 0.341, but the control (medium alone, the same as the blank, but kept in the incubator for the same period as the bacterial culture) gives me 0.022.
What is the threshold in OD600 units to say that the control is not contaminated?
Thank you
Dear all
I am trying to run the RFdiffusion tool locally. While running the script inference.py, I get the error "NVTX functions not installed. Are you sure you have CUDA build?"
I have the latest version of CUDA with an NVIDIA GPU. While installing the software, I performed the conda install of the SE3-Transformer.
Any suggestions on running this software locally would be highly appreciated. Thanks in advance!
I have just started using Bader Charge.
As per the instructions, I first unpacked all the required files (for instance, Bader binary files, source code files, and chgsum.pl script). I ran electronic SCF calculation by adding these tags to my INCAR file:
```
LAECHG = .TRUE.
NSW=0
LWAVE = .FALSE.
LCHARG = .TRUE.
```
Afterwards it generated the AECCAR0, AECCAR1, and AECCAR2 files. My system has 192 atoms. When I then used the chgsum.pl script to generate the CHGCAR_sum file, I received this message:
```
Atoms in file1: 0, Atoms in file2: 0
Points in file1: 7.1966367651e-06, Points in file2: 7.1966367651e-06
```
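The `Atoms in file1: 0` message suggests chgsum.pl never parsed the AECCAR headers, rather than a problem with the charge data itself. As a quick, hedged sanity check (this helper is not part of the Bader scripts, and the file path is an assumption), one can read the VASP 5 header directly and confirm the atom count and grid dimensions:

```python
def chgcar_header_info(path):
    """Return (n_atoms, grid_dims) from a VASP 5 CHGCAR/AECCAR header."""
    with open(path) as f:
        lines = f.read().splitlines()
    # VASP 5 layout: comment, scale factor, 3 lattice vectors, element symbols,
    # per-element counts, 'Direct'/'Cartesian', one line per atom, a blank
    # line, then the FFT grid dimensions.
    counts = [int(x) for x in lines[6].split()]
    n_atoms = sum(counts)
    grid_line = lines[8 + n_atoms + 1]
    grid = tuple(int(x) for x in grid_line.split())
    return n_atoms, grid

# Example (file names assumed): both inputs to chgsum.pl should report
# 192 atoms and identical grids.
# print(chgcar_header_info("AECCAR0"))
# print(chgcar_header_info("AECCAR2"))
```

If this reports 0 atoms or fails on the counts line, the header is malformed (e.g. a VASP 4-style file without the element-symbols line, or a truncated write), which would also explain the later end-of-file crash in bader while reading CHGCAR_sum.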
This doesn't make sense to me: I checked all my files, including the CHGCAR files, and they have the correct dimensions and are complete. Then I tried to run Bader:
```
./bader CHGCAR -ref CHGCAR_sum
```
I received this message:
```
GRID BASED BADER ANALYSIS (Version 1.05 08/19/23)
OPEN ... CHGCAR
VASP5-STYLE INPUT FILE
DENSITY-GRID: 160 x 160 x 160
CLOSE ... CHGCAR
RUN TIME: 0.87 SECONDS
OPEN ... CHGCAR_sum
VASP5-STYLE INPUT FILE
forrtl: severe (24): end-of-file during read, unit 100, file
/home/tyadav/54209/CHGCAR_sum
Image PC Routine Line Source
bader 000000000048A056 Unknown Unknown Unknown
bader 000000000040EC7F Unknown Unknown Unknown
bader 0000000000412CA6 Unknown Unknown Unknown
bader 0000000000416CA9 Unknown Unknown Unknown
bader 0000000000401F08 Unknown Unknown Unknown
bader 000000000040187D Unknown Unknown Unknown
bader 0000000000514D41 Unknown Unknown Unknown
bader 000000000040175E Unknown Unknown Unknown
```
Can someone possibly help me with this issue? I really appreciate any possible suggestions.
Dear colleagues,
I had been running Blast2GO for approximately two weeks when my computer suddenly froze, requiring a restart. Unfortunately, after restarting the PC, I was unable to recover the data from Blast2GO. I'm seeking advice on how to recover the lost data, or if recovery isn't possible, how to prevent this from happening again in the future.
Thank you for your assistance.
Hi,
I am a graduate student using a Shimadzu HPLC with a PDA detector.
This instrument was newly installed. Only the tech at installation and myself have run samples.
I am using it for research on cannabinoids specifically CBD and THC and only running diluted standards right now to modify the method.
I am using water (0.1% formic acid) as mobile phase A, and acetonitrile (0.1% formic acid) as mobile phase B. The total run time is 10 min. I have a gradient that starts at 70% B for 3 min, ramps up to 90% B over 2 min, holds for 1 min, then goes back down to 70% and holds for the rest of the run. The flow rate is 0.2 mL/min. My injection volume is 1 uL.
The column is a Shimadzu C18-120, 3 um 3.0 x 50 mm.
The PDA detector wavelength is from 190 nm to 600 nm and specific wavelengths are 210 nm and 220 nm.
During my first run I injected a CBD sample (CBD 50 ng/mL in acetonitrile). I made the mistake of not running a blank before the CBD sample :(
I ran several blanks after and still continue to see peaks.
I have remade the blanks many times in different vials, new solutions of blank.
I set up a run consisting of 75 blanks on a 2 minute "cleanout method" run where I was using 90% ACN for the full time. All the blanks in this batch had the same peak with a consistent intensity. (The intensity of the peak did not really decrease over the 75 injections).
I have also run Null injections and have gotten the same peak in those injections as well.
I switched mobile phases from ACN to methanol. When running the blanks with the methanol I still got the same peak. After reversing the column, flushing methanol through, and then returning the column to its normal orientation, I still got the same peak again.
First we had thought it was CBD that contaminated the instrument, but after switching the phases and running so many blanks we are not sure if this is something with the instrument that we should try to change/clean?
At this point I am not sure where this peak could be coming from and was looking for some advice/direction on what to try next!
I can provide additional information as needed!
Hi, I am running blind docking and it's my first time using AutoDock. I have been following a tutorial step by step, but I have now come across this issue and I cannot understand why it keeps recurring. If anyone could help me figure this out, that would be very much appreciated!
Thank you in advance.
I have a problem running my logistic regression. When I run my analysis, I get really strange values and I cannot find anywhere how to fix it. I already changed my reference category, which led to less strange values, but they are still there. Also, this only happens to two of my eight predictors; both have multiple levels/categories.
Can someone explain to me what's wrong and how I can fix it?
We have recently aquired plasmids with the following configuration:
EF1A>{gene of interest}:P2A:Bsd
When used for transfection (293FT cells and SH-SY5Y cells), these cells expressed the desired protein after 48 hrs (verified several times by western blot and viewing of GFP which is encoded in some of our plasmids). When the selection antibiotic is added, most cells survive, which is expected. However, after a few days, the cells no longer produce the desired protein (verified many times by western blot and viewing GFP).
To be sure, we always use a negative control for the antibiotics, cells which were not transfected, and they all died quickly (36 hrs at most).
Oddly enough, when used for lentiviral infection, there is no issue, and the cells continue expressing the protein even after a few weeks of antibiotic selection.
We have not run into this problem with other vectors acquired from other sources.
Thanks in advance
The landing (tip-approach) process of the AFM (NT-MDT) we use does not work. Although the computer program appears to be running, the tip does not approach the sample, and we cannot observe any mechanical movement. We would be more than happy to receive help with this.
Kindly guide me about changing the cut-off energy, k-points, or any other value. Please answer; I will be thankful to you.
I am having an issue running the "Rescale a Grid Solver Geometry File" example from the official DAMASK website. It fails at 'scaled = grid.scale(cells)' (see Figure 1 below) with the error:
Error: output shape not correct
My background is primarily qualitative, so I'm struggling to understand the best way to approach analysing my research. I'm conducting a survey with multiple IVs that will be split into four blocks, but I also have multiple DVs (use of various related services), and I plan to use Hierarchical Multiple Regression. The first analysis will just be use/non-use of any of the services, so HMR would be fine. But how do I compare use of the different services? Would I have to run an individual HMR analysis for each DV, or is there a better way to do this?
I am studying the influence of 7 variables on a DV. Seven questionnaires have been adapted, and one for the DV has to be established. So is it okay for me to run a CFA on all the questionnaires? Or only an EFA on the newly established scale and a CFA for the rest? I've read that EFA is only for newly established scales. Or do I proceed with the actual data collection and then run a CFA on all the scales?
Hi all,
I want to simulate isolated molecules as one unit in GROMACS. Any idea how I would do that?
I have a protocol that requires me to use a centrifugation speed of 44,000 x g for 70 min to generate cytoplasts from a human cell line. However, I only have access to an ultracentrifuge rotor that goes up to 42,200 x g.
Would it be possible for me to just run this ultracentrifuge at 42,200 x g for 70 mins or slightly longer to generate my cytoplasts?
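For a rough answer, a common first-order rule (ignoring the rotor-specific k-factor, which is the more rigorous way to convert between rotors) is to hold the product of RCF and time constant, so the small shortfall in g is made up with a proportionally longer spin:

```python
# Hedged rule-of-thumb sketch: keep RCF x time constant when substituting a
# slower spin. A k-factor calculation for the specific rotors is more accurate.
def adjusted_time_min(g_protocol, t_protocol_min, g_available):
    return t_protocol_min * g_protocol / g_available

t = adjusted_time_min(44_000, 70, 42_200)
print(f"spin at 42,200 x g for about {t:.0f} min")
```

Under this simple assumption, about 73 minutes at 42,200 x g approximates the prescribed 70 minutes at 44,000 x g; whether cytoplast yield is sensitive to that small difference is an empirical question.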
Hello,
I am running a UMAT and need to save about 300 SDVs as history output to plot them, but in the odb file I only get SDV1 to SDV100. I get no warning or error messages. I even tried naming all 300 SDVs individually in the inp file, but that didn't work either. Has anyone faced the same problem or know how to address it?
Thanks
Hi everyone. I am very new to computational protein work, so I have no idea how to install AlphaFold 2 to run on my computer.
I heard that if you install AlphaFold and all required dependencies from source, it runs significantly faster than using a pre-built container.
I don't really have any idea how those two approaches work. If anyone could help me with this, I would be really grateful.
Does any of you know how to do this?
hello,
I would like to add certain faces to a group. The process will run automatically, so I am writing a script.
But it doesn't work, because SpaceClaim names the faces randomly.
For example, this call adds the wanted faces to a group: groupx1.append(GetRootPart().Bodies[j + 2].Faces[3]).
However, it does not always work. See the appendix.
Can anyone help me?
details:
MIPS USB Cameras provide a quick and easy means of displaying and capturing high-quality video and images on any USB 2.0-equipped desktop or laptop computer running a supported Microsoft® OS.
Please send me details.
thanks
Karthick
I want to run a meta-analysis for linkage mapping and GWAS QTLs and I will require a software to be able to achieve this.
I am attempting to perform an EMSA with a transcription factor and its wild-type binding sequence, but the first attempt showed that the protein never left the well. After some research, I discovered that the theoretical pI of the protein is approximately 8.8, while my running buffer is pH 8.3.
What is the best way to run an EMSA for this protein? I am worried that if I change the loading buffer pH that the protein:DNA binding might be affected. Can I just adjust the pH of the running buffer to 1 point above the protein's pI (e.g. 9.8) with NaOH? Do I need to adjust the pH of the gel as well?
I'm running RNA extracted from Saccharomyces cerevisiae in a Tapestation 4200. The RINe value is excellent, but the sizes of the 18S/28S bands are lower than expected: ~1000 and ~1800 instead of ~2000 and ~4000, respectively. The internal control (lower marker, 25nt) in each sample ran as expected. Yeast don't seem to have a hidden break in the rRNA.
Has anyone experienced a similar problem?
Hello all,
I had a question about MD, which I would appreciate if you could answer: I want to run MD without solvation. Please suggest any ideas.
I've been learning about ultracapacitors recently and am particularly curious about the short-circuit behavior of these cells. I'm aware that it's not a recommended procedure, but my intention is to conduct it at very low voltages, for instance at 0.1 V. Due to current lab limitations, I'm unable to run experiments at the moment, but the specific cell I'm studying is a Maxwell 325 F capacitor, featuring approximately 1.8 mOhm ESR for a fresh cell.
I have a few questions:
- Is it considered safe to perform a short-circuit test at 0.1V?
- How long would it take for the cell to stabilize around 0V or <1mV when the circuit is shorted?
- What can be expected in terms of the cell surface temperature during this test?
- When the short-circuit wire is removed, will the cell voltage jump back?
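Treating the cell plus short as a simple RC circuit gives first-order answers to the timing questions above. This is a hedged estimate: real supercapacitors show porous-electrode dynamics and charge redistribution, which is also why the voltage typically rebounds after the short is removed.

```python
import math

# First-order RC estimate for shorting a 325 F, 1.8 mOhm cell charged to 0.1 V.
C = 325.0        # capacitance, F
R = 1.8e-3       # ESR, ohm (fresh-cell figure quoted above)
V0 = 0.1         # starting voltage, V

tau = R * C                        # RC time constant, s
i_peak = V0 / R                    # initial short-circuit current, A
t_1mV = tau * math.log(V0 / 1e-3)  # time for V(t) = V0*exp(-t/RC) to reach 1 mV
e_joules = 0.5 * C * V0**2         # stored energy dissipated in the ESR, J

print(f"tau ~ {tau:.2f} s, peak current ~ {i_peak:.0f} A")
print(f"time to <1 mV ~ {t_1mV:.1f} s, energy ~ {e_joules:.2f} J")
```

Under these assumptions the transient is over in a few seconds and only about 1.6 J is dissipated, so the surface temperature rise should be negligible; the rebound seen after removing the short comes from charge redistribution inside the porous electrodes, which the simple RC model does not capture.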
I would greatly appreciate any feedback or insights you can provide.
Dear All,
Could you tell me how to run an MD simulation with any software? I have DS and VMD on hand. I tried to run 30 ns in DS, which took 6 days; is that normal? The reviewer asked me to extend this procedure to 100 ns, but my computer cannot complete the process. Could you let me know how to set up all the parameters and settings to finish it? Many thanks, from the bottom of my heart, for your help.
I am doing LC-MS (LC-QToF) of a phosphonate compound (nitrilotris methylene phosphonic acid), for which I add the derivatization agent trimethylsilyl diazomethane to each sample and wait 2 hours before running them on the instrument. I do not have a mass-labeled internal standard for the compound, so I am using caffeine as the internal standard. I am using a C18 column. In the beginning I was getting disturbed peaks with a tail from the start of the runs; then for two weeks I got clean peaks. However, I have started getting the same type of disturbed peaks again. What could cause disturbed peaks in LC-MS? I have attached a screenshot of the disturbed peak I am getting. My internal standard peak also looks like this.
I am trying to run a coupled flow-deformation analysis for a slope subjected to rainfall infiltration with multiple time durations (i.e., 1 h, 3 h, 6 h, 12 h, 24 h & 36 h).
For the initial steps, I set the maximum number of steps to 10000 to complete the analysis, but after completing 3 stages, it gives me the error "Prescribed ultimate time not reached". When I try to increase the number of steps beyond 10000, the software doesn't allow it.
Is there any solution for this?
Could artificial intelligence run its own simulation by assuming initial conditions in whatever way is convenient?
I'm trying to run a model in AMOS and I get this error but I don't know what I can do about it.
Hi,
I am running a proteomic analysis on Salmonella; however (for the sake of saving resources), I would like to know if I can run the analysis on a bacterial cocktail of three strains of the same serovar.
I started running SWAT-CUP with 100 simulations, but the error message (shown in the attached screenshot) says that the output files do not exist in the directory path, even though the calibration ran successfully.
I think SPSS made my computer slow. Be careful when running it on your computer.
I am trying to distinguish between fragments of 393 and 374 bp by RFLP. I am running a 3.5% agarose gel in a 25 x 15 cm electrophoresis unit to visualize the enzyme digest. I could not distinguish the bands after 60 minutes of running at 120 volts. When I ran for another 25 minutes, the bands faded, but the marker was still visible. If I run it directly for 85 minutes, no bands, including the marker, are visible in the gel. I also tried post-staining with ethidium bromide, but it didn't work. Any suggestions? Thank you...
I need to calculate the single-point energy of a species. I submitted a series of jobs; most of them worked, but a few failed with an error like the following (everything was normal until this step):
```
Localizing the valence orbitals
[file orca_mdci/mdci_util.cpp, line 960]: Error (ORCA_MDCI): Cannot open GBW file: //w0/tmp/slurm_xxxx.xxxx/xxxx/example15.loc
ORCA finished by error termination in MDCI
Calling Command: /cvmfs/restricted.hpc.rwth.de/Linux/RH8/x86_64/ISV/ORCA/5.0.4-gompi-2022a/bin/orca_mdci //w0/tmp/slurm_brxxxx.xxxx/xxxx/example15.mdciinp.tmp
[file orca_tools/qcmsg.cpp, line 465]:
.... aborting the run
```
I have tried some solutions found on this website, such as submitting with a single core, adjusting memory, allowing RHF, and switching to SOSCF when SCF doesn't converge, but none have been effective. I am now hoping for assistance from experienced users, and I express my sincere gratitude.
(Additionally, I wonder whether there might be some unstable or 'weird' species preventing the single-point or other calculations, since regardless of the adjustments made, no results are obtained in any situation.)
When the Wi-Fi is on, the job in Abaqus is aborted.
For my project I have created an intervention that involves educating patients about a topic. I make them do a survey before, put up my intervention, then do the same survey after. I want to compare the before and after scores to see if my intervention made a significant difference. My problem is that, while my intervention has been running on the ward, most but not all of the patients have changed. What statistical test can I use?
(This is the first time I am using a statistical test on my own data, sorry if this is silly/ obvious)
I am running a survey regarding metro-area network architecture and have some analytics to share through this link: https://docs.google.com/forms/d/1kJnEjukNDGC4JARuhgBI0HUUrQ8UNA-G71Aa6heaK8U/viewanalytics
The link will be active today only, until 8 pm CET.
Cheers,
Etienne
hi
I have started working on fog networks, and for this purpose I want to use the iFogSim simulation toolkit, but I am facing a lot of errors while running the code in Eclipse. Can anyone help me figure this out?
thanks
Hello RG Community:), regarding the above topic:
I entered all of the appropriate files within the ffTK Opt. Charge tab; however, I am generating the errors below upon Run Optimization (just the first error is included for brevity):
Atom name: C1 not found in molecule
Attached is the INPUT PSF and PDB Files and QM Target Data first output file: output14C+H-ACC-C1.out
I believe the molecule is referencing the pdb file which indicates the C1 atom.
Please let me know if you would need to inspect the INPUT par files.
Thanks if you know:),
Joel 🚀
I am running a protein-ligand complex simulation using GROMACS 2021 on Windows Subsystem for Linux. My laptop has an Nvidia GeForce RTX 3050 GPU. When I run the Lysozyme tutorial simulation (as given in the GROMACS tutorials) for 100 ns, the expected finish time shown is approximately 1 week. I looked at the topology file to get a sense of the system size and found that my total system is approximately 47,500 atoms, including solvent, ions, protein, and ligand.
1) The "dt" defined in the mdp file is 2 fs and the number of steps (nsteps) is 50000000. I wanted to know if there is a way to speed up the process, or is this the natural computation time that an RTX 3050 provides? I went through other queries about the same issue, and I have also worked with an RTX 3080 Ti, which completes a 100 ns simulation in approximately 30-40 min. Since the 3050 belongs to a similar class/family as the 3080 Ti, I assumed it should at least provide a better simulation time (say, 100 ns in 1-2 hours). I might be wrong about the technical aspects of GPU computation. Any help in this matter will be much appreciated.
2) Also, I wanted to know: since I am running these simulations in Windows Subsystem for Linux (WSL2), does that affect the computation speed of the GPU when MD simulations are run using GROMACS?
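For orientation, the simulated length follows directly from the mdp settings, and the wall time from the machine's throughput. A hedged back-of-envelope (the ns/day figures are illustrative assumptions, not RTX 3050 benchmarks):

```python
# nsteps x dt gives the simulated time; dividing by throughput gives wall time.
dt_fs = 2.0
nsteps = 50_000_000
sim_ns = nsteps * dt_fs * 1e-6    # fs -> ns

print(f"simulated time: {sim_ns:.0f} ns")
for ns_per_day in (15, 50, 150):  # illustrative throughputs only
    days = sim_ns / ns_per_day
    print(f"{ns_per_day:>4} ns/day -> {days:.1f} days")
```

A week for 100 ns corresponds to roughly 14 ns/day, which is plausible for a system of this size on a laptop-class GPU, whereas 100 ns in 30-40 min would imply thousands of ns/day; comparing the "Performance" (ns/day) line at the end of md.log between the two machines is the cleanest check.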
I would appreciate if someone can help me out in this regard.
Thanks
Satyam
I am running a regression in EViews that uses one main x variable and includes AR and MA terms. When I run the Variance Inflation Factor test, high VIF values are produced for the AR, MA, and lag terms as well as for the main x variable. Are these high VIFs problematic? Although the AR, MA, and lag terms are listed with the x variables, I don't think they should be treated like them when we talk about VIF. Wouldn't we expect high correlation between the ARMA/lag terms and the x variable? Isn't that the whole point of them? What am I missing here?
I am considering running an MD simulation of a protein with varying ligand concentrations. I think replica exchange molecular dynamics should be able to do that, but I have no idea how that is done.
Is it possible to get a tutorial or possibly other methods for running such a simulation?
This is a first run in our lab and there are several variables that each add several days to the protocol. I will be using the DeepLabel Antibody Staining Kit from LogosBio to label c-Fos. Questions are:
Which primary antibody do you use? At what concentration? How long do you incubate?
What concentration of secondary antibody do you use? How long is this incubation?
If anyone would share experience/protocol that would be great! We have been clearing and imaging brain and other peripheral tissues as well so if we can be of any assistance, please feel free to contact me.
I have been trying to figure out how to run GDC using Bio-logic software but still not able to figure it out. Can someone please point me in the right direction to run this experiment?
I'm doing several steps, such as minimization and equilibration, to start running MD simulations and I'm trying to automate this process by running one step after finishing the other. When I go between different steps, I need to provide a PDB file from the last frame of DCD. Is there any way to tell it to write this PDB file within the NAMD configuration file? I've been doing it manually by loading DCD then PSF in VMD and saving the last frame as PDB, which is not ideal for automation.
I am interested in individual costings for education hygiene programs in pre-schools.
1. My research involves 10 explanatory variables. After performing the CIPS panel unit root test, I found 4 variables stationary at level, 3 variables stationary at I(1), and 2 variables stationary at I(2). What should I do next? Do I perform a cointegration test?
2. I ran both the Westerlund and Pedroni cointegration tests in Stata and EViews, but Stata reports "No more than six covariates can be specified," and in EViews I can't run the test with more than 7 variables. What should I do?
I have purified an overexpressed protein from BL21 (DE3) cells using a Ni-NTA column. When we run the purified protein on native PAGE and SDS-PAGE, the two show different results: SDS-PAGE shows only one band of purified protein, whereas native PAGE shows two bands. I have repeated the whole experiment three times and found the same results. What could explain two bands on native PAGE when there is only one on SDS-PAGE? I have attached the native PAGE image.
I am doing an MD simulation of the HDAC11 protein with prospective ligands. I have completed the molecular dynamics production run, but whenever I run MMPBSA.py, it shows the following error:
```
Loading and checking parameter files for compatibility...
cpptraj found! Using /home/mohon/anaconda3/envs/amber/bin/cpptraj
mmpbsa_py_energy found! Using /home/mohon/anaconda3/envs/amber/bin/mmpbsa_py_energy
Preparing trajectories for simulation...
100 frames were processed by cpptraj for use in calculation.
Running calculations on normal system...
Beginning GB calculations with /home/mohon/anaconda3/envs/amber/bin/mmpbsa_py_energy
calculating complex contribution...
calculating receptor contribution...
calculating ligand contribution...
Beginning PB calculations with /home/mohon/anaconda3/envs/amber/bin/mmpbsa_py_energy
calculating complex contribution...
File "/home/mohon/amber22/bin/MMPBSA.py", line 100, in <module>
app.run_mmpbsa()
File "/home/mohon/amber22/lib/python3.11/site-packages/MMPBSA_mods/main.py", line 224, in run_mmpbsa
self.calc_list.run(rank, self.stdout)
File "/home/mohon/amber22/lib/python3.11/site-packages/MMPBSA_mods/calculation.py", line 82, in run
calc.run(rank, stdout=stdout, stderr=stderr)
File "/home/mohon/amber22/lib/python3.11/site-packages/MMPBSA_mods/calculation.py", line 472, in run
error_list = [s.strip() for s in out.split('\n')
^^^^^^^^^^^^^^^
TypeError: a bytes-like object is required, not 'str'
Fatal Error!
All files have been retained for your error investigation:
You should begin by examining the output files of the first failed calculation.
Consult the "Temporary Files" subsection of the MMPBSA.py chapter in the
manual for file naming conventions.
```
My input file configuration is given below:
```
Input file for running PB and GB
&general
endframe=1000, verbose=2,
# entropy=1,
/
&gb
igb=2, saltcon=0.100
/
&pb
istrng=0.100,
/
```
Can anyone help me with it?
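The TypeError at the bottom of the traceback is a Python 3 bytes-vs-str mismatch: the captured process output arrives as bytes, but calculation.py splits it with a str separator. A minimal illustration of the failure and the usual fix (a sketch of the bug class, not a patch for the actual AmberTools file):

```python
# The captured output is bytes; bytes.split() needs a bytes separator.
out = b"line one\nline two\n"

try:
    out.split('\n')                # what the traceback shows failing
except TypeError as err:
    print(err)                     # "a bytes-like object is required, not 'str'"

# Fix: decode to str first (or split with b'\n' instead).
lines = [s.strip() for s in out.decode().split('\n') if s]
print(lines)
```

Editing that line in MMPBSA_mods/calculation.py to decode `out` before splitting should get past the error; updating AmberTools may also resolve it, since bytes/str issues of this kind have been addressed in patched releases.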
Hi, I am trying to create a script for a particle that impacts a substrate. I want to run multiple simulations with varying particle sizes that would run automatically. How would I go about accomplishing this?
Good day everyone,
I would like some help in getting online references to answer the question I have posed. I am dithering and running from pillar to post to find answers. Many thanks.
Regards
Vijaykumar
I am working on a system of 40 atoms (hafnium selenide, monoclinic structure). When I try to run the command "mpirun -np 8 VASP", the process terminates with this error. I have 8 GB RAM and I tried all possible combinations of NCORE and KPAR, but nothing worked. What can I do now? This structure does not have any periodic atoms. Can I use ISYM=0? Will it help solve the issue?
Hello,
So I have been struggling to get a successful western blot using MIN6 derived EVs, and it has been a real struggle.
Every time I isolate my EVs and lyse them, I run the proteins on a gel and see no bands at all. I use Coomassie blue or the stain-free precast gels to check the run.
The ladder shows up fine, though.
After lysing my EVs, I measure the protein amount using microBCA: when I diluted the sample 1/2 I got a final concentration of around 200 ug/ml, and when I diluted it 1/10 I got a final concentration of 400 ug/ml. This is already strange, but I still loaded my sample assuming I had 200 ug/ml, to be on the safe side, and I used 4x Laemmli in order to avoid unnecessary dilutions; by my calculations I should have loaded around 15 ug. But the imaging showed no protein at all, and now I am really puzzled. (The first lane is the ladder, and I am supposed to see two bands to the left of it.)
Briefly, here are the steps I followed:
-Collect media from min6 cells (150ml)
-Centrifuge 500g/10min, collect supernatant, centrifuge 2000g for 20min, collect supernatant,ultracentrifuge 120000g for 90min(4C), keep the pellet and wash with pbs 150000g for 70min(4C).
-Finally I diluted the pellet in 500ul of PBS and store at -80C.
Lysis:
-Take 100ul of my ev sample, put it in a 10k column, centrifuge at 1400g/15min at 4C, add 500ul of 1XRIPA to my concentrate, spin 14000g/15min 4c. Put the column upside down in the tube and spin 2000g/2min. Add 70ul of RIPA and incubated on ice for 15min, then spin again 14000g/15min 4c and collect supernatant, and put on ice until further use.
Gel:
I use stain free anyKd precast gels with 50ul wells, and I use for the prep solution :4x leammli(900ul)+b-mercaptoethanol(100ul), because I am looking for tsg101 antibody. I then mix 1/8th of prep solution with 7/8th of my lysed sample. Heat up at 70C 10min, and load the sample in the well, run at 120v/1h.
I don't know what went wrong.
I tried once running the gel with unlysed EVs, and I got a faint band with Coomassie blue, so maybe the lysis is wrong, or the initial amount of conditioned media is too low, as I saw some people starting with 1-2 liters.
I would be grateful if anyone can help.
Thanks
Hello,
I am using Pophelper in R to run the algorithm implemented in CLUMPP for label switching and to create the barplots for the different values of K (instead of DISTRUCT).
I am getting a warning message when I merge all the runs from the same K using the function mergeQ() from the package, which is slightly bothering me. Can anyone help me with this?
The warning message is as follows...
In xtfrm.data.frame(x) : cannot xtfrm data frames
Thanks,
Giulia