Science topic

Running - Science topic

An activity in which the body is propelled by moving the legs rapidly. Running is performed at a moderate to rapid pace and should be differentiated from JOGGING, which is performed at a much slower pace.
Questions related to Running
  • asked a question related to Running
Question
5 answers
I understand that it is possible to estimate the phosphorus-fixation capacity of a soil using the Freundlich and Langmuir models, and I am sure that running these models is quite an accurate approach to determining P fixation.
I decided to ask this question because I feel that, for farmers, this is a very complex approach. I was therefore wondering whether there is a different, more practice-oriented process to follow.
Relevant answer
Answer
You're correct that using models like the Freundlich and Langmuir equations to estimate soil capacity for phosphorus (P) fixation can be accurate but may be complex for farmers to implement directly. Fortunately, there are more practical and farmer-friendly approaches available:
1. Soil Testing: Regular soil testing is a simple and effective way for farmers to assess soil fertility, including nutrient availability and fixation potential. Soil testing laboratories can provide farmers with comprehensive reports that include recommendations for nutrient management based on the specific characteristics of their soils.
2. On-Farm Trials: Conducting on-farm trials allows farmers to observe the performance of different management practices, including fertilization strategies, on their own fields. By comparing the results of different treatments, farmers can gain practical insights into how their soils respond to different inputs and make informed decisions about nutrient management.
3. Soil Amendments: Applying soil amendments such as lime or gypsum can help reduce phosphorus fixation in soils with high levels of aluminum or iron oxides. These amendments can help improve soil pH and cation exchange capacity, making phosphorus more available to plants.
4. Precision Agriculture Technologies: Advancements in precision agriculture technologies, such as remote sensing, soil mapping, and variable rate application, enable farmers to manage their fields more efficiently and effectively. By accurately targeting inputs based on site-specific soil and crop conditions, farmers can optimize nutrient use and minimize the risk of phosphorus fixation.
5. Integrated Nutrient Management: Adopting an integrated approach to nutrient management that combines organic and inorganic sources of nutrients can help reduce phosphorus fixation and improve soil fertility over the long term. Practices such as crop rotation, cover cropping, and organic matter addition can enhance soil health and nutrient cycling, reducing the need for synthetic fertilizers.
By combining these practical approaches with scientific knowledge and expertise, farmers can effectively manage phosphorus fixation and optimize nutrient use in their agricultural systems. Agricultural extension services and agronomic advisors can also play a valuable role in supporting farmers in implementing best management practices tailored to their specific circumstances.
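That said, if one does want to fit the isotherm itself, the computation is small. A minimal Python sketch with hypothetical sorption data (qmax and K are the fitted Langmuir parameters; real work would use measured equilibrium data):
import numpy as np
from scipy.optimize import curve_fit

# hypothetical sorption data: equilibrium P concentration C (mg/L), sorbed P q (mg/kg)
C = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 80.0])
q = np.array([45.0, 160.0, 250.0, 340.0, 410.0, 450.0])

def langmuir(C, qmax, K):
    # q = qmax*K*C/(1 + K*C): qmax is the sorption maximum, K the binding affinity
    return qmax * K * C / (1.0 + K * C)

(qmax, K), _ = curve_fit(langmuir, C, q, p0=[500.0, 0.05])
print(f"Langmuir sorption maximum qmax = {qmax:.0f} mg/kg, affinity K = {K:.3f} L/mg")
The fitted qmax is the soil's sorption capacity, which is the quantity the isotherm approach is after; the practical approaches above avoid having to measure the sorption curve at all.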
  • asked a question related to Running
Question
1 answer
I'm trying to install packages for my master's thesis. Installing the package 'foreign' works, but when I try to run it, I receive the message:
Error in .helpForCall(topicExpr, parent.frame()) :
no methods for ‘foreign’ and no documentation for it as a function
How do I fix this? I have R version 4.4.0 on macOS 11 or 12, I believe.
Relevant answer
Answer
Pleun St Can you write exactly what you typed to install and run it? That error usually means the package name was passed to the help system (e.g. ?foreign) or called as if it were a function; a package is not "run" but loaded, with library(foreign), after install.packages("foreign").
  • asked a question related to Running
Question
2 answers
Hi everyone,
I have three objectives in the protocol.
My first objective is to calculate the incidence of specific illness presentations to the emergency departments.
Since I have inclusion and exclusion criteria in the study, should I address the first objective before applying those criteria?
I ask because, to calculate the incidence accurately, it is essential to include every individual diagnosed during the study period.
Thanks
Relevant answer
Answer
Thanks for your answer
  • asked a question related to Running
Question
1 answer
I am performing western blots, and recently I have been obtaining faint bands for samples that I had already run and that had previously given darker bands. I wish to determine the protein concentration in those samples, but they are now gel-ready (loaded in Laemmli buffer). Can anyone please suggest a way?
Relevant answer
Answer
It's difficult because of the presence of substances that interfere with protein assays (detergent, reducing agent, dye). The approach I would take would be to run the samples on SDS-PAGE, transfer the proteins to the blotting membrane, stain the membrane temporarily with Ponceau S, and photograph the stain in white light with a gel documentation instrument. Then you can integrate the density of all the bands in each lane as if they were a single band. Afterwards you can destain the membrane and use it for immunoblotting.
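For the lane-integration step, a rough Python sketch of the densitometry (file name and lane boundaries are hypothetical, and a proper analysis should also subtract local background):
import numpy as np
from PIL import Image

# hypothetical file name; lane bounds would be read off the image by eye
img = np.asarray(Image.open("ponceau_membrane.png").convert("L"), dtype=float)
signal = 255.0 - img                         # dark stain on light background
lanes = [(40, 90), (100, 150), (160, 210)]   # (left, right) pixel bounds per lane
for i, (x0, x1) in enumerate(lanes, 1):
    # integrate the whole lane as if it were a single band
    print(f"lane {i}: integrated density = {signal[:, x0:x1].sum():.0f}")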
  • asked a question related to Running
Question
8 answers
Hello, This is my VERY first time attempting a NEB calculation, and I must admit, I'm feeling quite confused. I would sincerely appreciate any feedback or suggestions. Here's the issue:
Following the instructions provided, I have two initial and final optimized geometries, represented by POSCAR1 and POSCAR2. Then, I used the nebmake.pl script to generate directories corresponding to the number of images.
In my case, I used 3 images, resulting in the creation of five subdirectories numbered 00, 01, 02, 03, and 04. Each subdirectory contains the respective POSCAR file. Currently, my parent directory looks like this:
directory_00
directory_01
directory_02
directory_03
directory_04
INCAR
KPOINTS
POSCAR1
POSCAR2
POTCAR
I also placed OUTCAR files in the initial and final structures' subdirectories, i.e., directory_00 and directory_04.
Now, ideally, I should proceed with running my NEB calculation, correct? However, after submitting the VASP job, I kept encountering an error message stating "POSCAR: No such file or directory."
Then, I attempted to run VASP individually in each subdirectory. For instance, consider directory_01: I have all four input files in that directory, yet after running the VASP calculation I encountered another error: "forrtl: No such file or directory."
Additionally, while I was going through the instructions for installing VTST codes, I noticed specific guidelines for compiling VTST code into VASP. Currently, I'm using VASP version 6.4.2.
The instructions involved downloading the vtstcode-199. They included the following steps: "To build the code, the VASP .objects and makefile need to be changed. Find the variable SOURCE in the .objects file (a hidden file in src/), which defines which objects will be built, and add the following objects before chain.o ..."
I cannot locate the .objects file anywhere!
This doesn't make sense to me at all. Have I misunderstood the process, or is there something else going wrong?
Relevant answer
Answer
Hongye Qin Yes Exactly! Well thank you :)
I'll surely get back to you.
  • asked a question related to Running
Question
1 answer
My nanopore sequencing run has generated some "unclassified reads". Can anyone explain what causes them to be unclassified and how to avoid them in future?
Relevant answer
Answer
Did you find any solution? Kindly share; I am facing the same problem as well.
  • asked a question related to Running
Question
3 answers
I've been trying to find a way to automate the process of identifying the gas exchange threshold when plotting VCO2 over VO2. Most seem to do it visually, but a 2019 paper (PMID: 31699973) used MATLAB code to identify the inflection point (lsqcurvefit) and run linear regressions above and below the inflection (film) to identify a similar threshold while plotting VO2 over watts. Any suggestions/input would be helpful.
Relevant answer
Answer
Hi Sean,
Curious if you came up with a way of automating this in the end? Let me know if you did!
Thanks
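In case it's still useful to anyone who lands on this thread, the two-segment regression idea from that paper can be sketched outside MATLAB as well. A minimal Python version (hypothetical synthetic breath-by-breath data; real data would need filtering/averaging first):
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
vo2 = np.linspace(0.5, 3.5, 120)    # L/min, hypothetical
vco2 = np.where(vo2 < 2.0, 0.9*vo2, 1.8 + 1.3*(vo2 - 2.0)) + rng.normal(0, 0.03, vo2.size)

def two_lines(x, x0, y0, k1, k2):
    # continuous two-segment linear model: slope k1 below breakpoint x0, k2 above
    return np.where(x < x0, y0 + k1*(x - x0), y0 + k2*(x - x0))

popt, _ = curve_fit(two_lines, vo2, vco2, p0=[2.5, 2.0, 1.0, 1.5])
print(f"estimated threshold at VO2 = {popt[0]:.2f} L/min (slopes {popt[2]:.2f} -> {popt[3]:.2f})")
The fitted breakpoint x0 plays the role of the gas exchange threshold; starting values matter because the model is not smooth at the breakpoint.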
  • asked a question related to Running
Question
1 answer
I am running MaxQuant and it starts running and almost immediately stops. I go into the error folder (combined-->proc, then select Configuring 11.error). A screenshot of the error is attached. I had converted .d files from an Agilent system to .mzML, as I couldn't get MaxQuant to recognize the .d files. I also went into global parameters-->advanced and unchecked the use of .NET Core, as .NET Core was throwing an error and I found that doing so helped others in the same boat. The data files and FASTA files are all in the same location.
Relevant answer
Answer
Hi Chelsea St. Germain,
Did you overcome this issue? I also got this problem even with the files that were previously successfully analyzed.
Best,
Alex
  • asked a question related to Running
Question
8 answers
Hello, I use ANSYS Explicit Dynamics to simulate something, and after running the simulation I get this error message. How can we solve it?
Thank you!
Relevant answer
Answer
It was the commercial 18.2 that had the error. Thanks
  • asked a question related to Running
Question
1 answer
I am not sure these phones are commercially available yet; 6 GB of RAM is only available on the 15 Pro. I have no experience with MATLAB. My current service provider has the YouTube app, which lets me see these types of plots as videos from podcasters (typically academics in math departments). An example would be a transcendental equation involving one variable.
A 2020 solution to the interior tethered goat problem is the layman's description. The goat's "range" is symmetric and lobed. Similar functions abound in nature. In some instances 3D solutions are easier. To not consume RAM I'd target 6 or so angular steps, i.e. (2 x 3.141)/6 radians each.
Relevant answer
Answer
While the exact specifications of the iPhone 15 Pro aren't officially available yet (as of May 7, 2024), it's likely that its 6GB RAM will be sufficient to run MATLAB's animated or dynamic 2D plots. Here's why:
  • MATLAB's 2D plotting requirements are modest: 2D plots typically require less memory compared to complex 3D graphics or image processing tasks.
  • Mobile hardware optimization: Phones are optimized for efficient graphics processing, and 6GB RAM is becoming increasingly common for high-end phones.
However, there are some caveats to consider:
  • MATLAB Mobile limitations: MATLAB offers a mobile version with a limited feature set compared to the desktop version. It might not support all the functionalities you'd find in the desktop MATLAB for creating advanced animations.
  • Complexity of plots: If your plots involve a large number of data points, complex calculations, or heavy customization, they might require more resources and potentially slow down the phone.
Overall, for basic animated or dynamic 2D plots, the iPhone 15 Pro's 6GB RAM should be sufficient. But for more demanding tasks, consider using the desktop version of MATLAB on a computer.
Here are some additional points to keep in mind:
  • MATLAB Mobile Availability: Check if MATLAB Mobile is compatible with the iPhone 15 Pro's operating system when it's released.
  • Storage Space: While RAM handles running applications, ensure you have enough storage space on your phone to store the MATLAB Mobile app and any data files.
I recommend checking the official documentation or contacting MathWorks (MATLAB's developer) for specific information on MATLAB Mobile's capabilities and compatibility with the iPhone 15 Pro.
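As a concrete instance of the one-variable transcendental equation mentioned in the question, the classic interior tethered goat problem (the one behind that 2020 paper) reduces numerically to a single root-find: choose the tether length r, in units of the field's radius, so that the grazed lens equals half the field. A minimal Python sketch:
import numpy as np
from scipy.optimize import brentq

def grazed_area(r):
    # lens area between a unit circle (the field) and a circle of radius r
    # centered on the field's boundary (standard circle-circle intersection)
    return r**2*np.arccos(r/2) + np.arccos(1 - r**2/2) - 0.5*r*np.sqrt(4 - r**2)

# tether length r such that the goat grazes exactly half the field
r = brentq(lambda r: grazed_area(r) - np.pi/2, 1.0, 1.5)
print(f"r = {r:.5f} field radii")   # ~1.15873
This is exactly the kind of computation MATLAB Mobile (or any phone-class hardware) handles easily; RAM only becomes a concern for dense animated surface plots.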
  • asked a question related to Running
Question
3 answers
I would like to optimize my extraction using response surface methodology, but I'm perplexed about which design to use. I have just 2 factors with 2 levels each, so I considered using a CCD, but I don't understand the basics of the run/test results, which include some replication, or the lack-of-fit statistics (are these crucial?). If I change the design to have no replication, or simply minimize the lack of fit, is that OK? Or should I consider another design?
Thank you very much for your kind response.
Relevant answer
Answer
The lack-of-fit F test is used to test the hypothesis of model correctness: that is, whether or not there is statistically significant model misspecification.
In RSM, the lack-of-fit test is used to determine whether there is significant curvature in the experimental data: if the first-order model with interaction term is not able to capture adequate data variation, then a second-order design (CCD or Box-Behnken type) is needed to fit the data coming from this particular region of the experimental space. (A small numerical illustration follows the book references below.)
Details about the RSM process and the statistical tests associated with it can be found in the following books:
1) Chemometrics: Experimental Design by Ed Morgan
2) Response Surface Methodology by Myers, Montgomery and Anderson-Cook
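To make the pure-error/lack-of-fit split concrete, here is a small Python illustration with hypothetical data from a replicated 2^2 design with center points, fitting the first-order-plus-interaction model by least squares:
import numpy as np
from scipy import stats

# hypothetical 2^2 factorial in coded units, 3 replicates per corner, plus 3 center points
X = np.array([[-1,-1]]*3 + [[1,-1]]*3 + [[-1,1]]*3 + [[1,1]]*3 + [[0,0]]*3, dtype=float)
y = np.array([8.1,8.3,7.9, 10.2,10.0,10.4, 9.1,9.3,9.0, 12.9,13.1,12.8, 11.0,11.2,10.9])

# first-order model with interaction: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
A = np.column_stack([np.ones(len(y)), X[:,0], X[:,1], X[:,0]*X[:,1]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
ss_res = ((y - A @ beta)**2).sum()

# pure error: scatter among replicates run at identical factor settings
settings, inv = np.unique(X, axis=0, return_inverse=True)
inv = inv.ravel()
ss_pe = sum(((y[inv == g] - y[inv == g].mean())**2).sum() for g in range(len(settings)))
df_pe = len(y) - len(settings)         # 15 - 5 = 10
ss_lof = ss_res - ss_pe
df_lof = len(settings) - A.shape[1]    # 5 - 4 = 1

F = (ss_lof/df_lof) / (ss_pe/df_pe)
print(f"lack-of-fit F = {F:.2f}, p = {stats.f.sf(F, df_lof, df_pe):.4f}")
A significant F here signals curvature the first-order model misses, which is exactly the cue to move to a CCD or Box-Behnken design; note the test is only possible because the design contains replicates.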
  • asked a question related to Running
Question
4 answers
I am running a DNA PAGE after PCR (samples 6-15 are run in duplicate, with the second sample digested) to determine serotonin genotypes. The ladder (well 1) is on the far right of the attached image. I would greatly appreciate any advice on how to enhance band brightness and definition, thanks.
Additional information: 5 µL ladder added, 10 µL PCR product per well, pH of the buffer is correct. Temperature of the room ~75 °F, with the gel container NOT on ice.
Relevant answer
Answer
Paul Rutland Thank you!
  • asked a question related to Running
Question
3 answers
Hi Friends,
In some Fluent simulation problems, after starting the run the iterations proceed up to a certain point (in my case the 88th iteration) and then stop, with the solver still showing "loading". Why does this happen? Even if I stop it, it stays in the same state and cannot be closed; I can only close it through Ctrl+Alt+Del. Why? Has anybody faced this problem?
Relevant answer
Answer
You are welcome sir
I'm available for any collaborative workflow
  • asked a question related to Running
Question
1 answer
I am trying to transfer a method from a column that is 250 x 4.6 mm, 8 µm particle size, to a column that is 300 x 7.8 mm, 9 µm particle size. The makeup of the columns is pretty much the same, and each is run as an isocratic method. How much more should I load, and how should I adjust other parameters to compensate?
Relevant answer
Answer
In order to have a similar retention time, increase the flow rate by a factor of 3.5.
You can increase the load by a factor of about 3.
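For reference, those factors follow from the column geometry alone. A tiny Python check (assuming comparable packing and porosity, so retention time scales with column volume at fixed flow):
d1, L1 = 4.6, 250.0   # mm, original column
d2, L2 = 7.8, 300.0   # mm, target column

area_ratio = (d2/d1)**2               # ~2.9: flow scaling at constant linear velocity
volume_ratio = area_ratio * (L2/L1)   # ~3.5: flow scaling for similar retention time
print(f"area ratio = {area_ratio:.2f}, volume ratio = {volume_ratio:.2f}")
# loading capacity roughly tracks column volume, so ~3x to 3.5x more sample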
  • asked a question related to Running
Question
1 answer
Is there anyone who has a tutorial video on MetaboAnalyst 6.0, or who can assist me in running it on my LC-MS/GC-MS data for untargeted metabolomics studies on soil and plants? We can collaborate on publishing together.
Relevant answer
Answer
MetaboAnalyst 6.0, huh? That's a solid choice for your untargeted metabolomics studies on soil and plants. I've used it before, so I can definitely give you a hand, Abdulwasiu Salaudeen Olawale.
First off, if you're looking for a tutorial video, I suggest checking out the official MetaboAnalyst YouTube channel. They've got some great resources that can walk you through the process step by step. Also, the MetaboAnalyst website itself has detailed documentation that's worth a read.
As for running it on your LC-MS/GC-MS data, let's break it down:
1. Data Preparation: Make sure your data is in the right format. MetaboAnalyst usually prefers data in common formats like CSV, Excel, or TXT.
2. Upload Your Data: Head over to the MetaboAnalyst website, sign in, and upload your data. Follow the prompts to select the appropriate analysis type and upload your LC-MS/GC-MS data.
3. Parameter Settings: Set your parameters according to your study design and the type of analysis you're doing. This includes things like normalization, scaling, and statistical tests.
4. Analysis and Interpretation: Once your data is uploaded and parameters set, run the analysis. MetaboAnalyst will crunch the numbers and generate results for you. Take some time to interpret the results and see what insights you can gather.
5. Collaboration and Publishing: Absolutely, collaboration sounds like a great idea! Once we've got some solid findings, we can definitely work on publishing together. It's all about spreading knowledge and advancing science, right?
Feel free to reach out if you need any specific help with any step of the process. I'm here to make sure your metabolomics study goes smoothly!
  • asked a question related to Running
Question
2 answers
I am developing a competitive lateral flow device for the determination of antibiotics in milk. I am struggling with uneven staining of the test line. The test I developed has 4 test lines and I have a problem with only the third test line. I tried to change the concentration, pH, detergents, alcohols, and sugars in the test-line dispensing buffer and no improvement was achieved. The other test lines and control lines don't have such a problem. The protein I dispense on the third test line is BSA-chloramphenicol conjugate.
Could you advise on the problem?
As a sample, I use milk that doesn't contain any antibiotics.
I use liquid gold conjugate for testing.
I attached the photo of the same sample run in multiplicate (23 total runs) at the same time.
Relevant answer
Answer
Thanks for the answer.
We tried using 5% methanol in the dosing buffer, but there was no improvement.
  • asked a question related to Running
Question
3 answers
I am running simulations with oxDNA using advanced sampling techniques such as umbrella sampling.
Relevant answer
Answer
Yes and no. There are minor differences which can cause your results to be false. Reducing the grid box would definitely decrease the total volume, so that interactions will increase, just as in a more concentrated situation. However, when your ligand and/or receptor (target molecule) goes outside the walls of the grid box, it comes back in from the other side (leaving the grid from the right, entering from the left, etc.). In that sense, you create a new collision probability for the atoms to interact. Let's assume you have a tiny drug that you want to bind to your large protein to inhibit/suppress its activity. In terms of theoretical chemistry, it may have no chance to reach any residue on the right side of that protein. As you shrink the grid box, some right-hand parts of that protein will enter from the left side, and your drug will be able to interact with that side as well, which is not supposed to happen. So if you want to increase the concentration, go ahead and do it; please do not decrease the grid-box volume to satisfy that need, since it will yield wrong outcomes.
  • asked a question related to Running
Question
11 answers
I am running an SDS-PAGE western blot using 10% acrylamide gels. However, my samples are not migrating beyond 55 kDa, and the bands are not defined. I am using 4x Laemmli buffer with LDS from Bio-Rad. The cell lysates are human whole-brain lysates. I am wondering if the LDS has something to do with this? I tried boiling the samples at 95 degrees for 5 min and heating at 70 degrees for 10 min; neither worked.
Relevant answer
Answer
The problem was with the Human Brain Whole Tissue Lysate (Adult Whole Normal) from Novus. When compared with a colorectal cell lysate, the difference was obvious. Thank you all for your comments.
  • asked a question related to Running
Question
1 answer
Hi everyone,
I hope you are all well.
I currently work with ATD-GC-MS running the VDA 278 standard, but I get the error message "Extended Trap Des. Equilibrium" on my ATD panel. When the error occurs, there are no peaks in the chromatogram (just like a blank run). After checking, the problem appears to be that the column flow rate can be unstable before trap heating.
Currently, I run n-alkane analyses with Tenax TA sorbent tubes (methanol as the solvent, with 1-2 µL of liquid solution spiked directly onto the Tenax TA tube). The trap is packed with Tenax TA as well. The trickiest part is that this error always occurs around two weeks after the PerkinElmer engineer's maintenance visit. (While the engineer is here, the machine runs well; however, after running some tubes, the error occurs unpredictably.) No leak is detected in the system, and the air/water check looks really good.
So I'm wondering if anyone has had the same issue before and is willing to share their experience with the extended trap desorption equilibrium.
These are the parameters I'm running now:
350 ATD
Temperature (°C)
Tube: 280
Transfer Line: 280
Valve: 280
Trap Low: -30
Trap High: 280
Trap Rate (°C/s): 99
Times (min)
Tube Desorb: 20
Trap Hold: 20
Trap Desorb (Desorb2): 1
Purge: 1
GC Cycle: 85
Pneumatics (mL/min)
Inlet Split: 44
Outlet Split: 19
Tube Desorb: 40
Column: 2
Col/Trap Desorb: 2
GC Column: HP-ULTRA 2, 50 m x 0.32 mm (diameter), 0.52 µm (film)
GC Temperature: 40 °C for 2 min, 3 °C/min to 92 °C, 5 °C/min to 160 °C, 10 °C/min to 280 °C, hold 10 min
Relevant answer
Answer
Could it be that your cold trap packing material is moving around and sometimes creating channels? Sometimes it is packed firmly and sometimes it is loose.
  • asked a question related to Running
Question
2 answers
I am currently using the gmx_MMPBSA tool for MM/PBSA analysis. I have a 100 ns GROMACS trajectory for a protein-ligand complex, which contains 10000 frames. Even if I run the MM/PBSA analysis with an interval of 5, it takes one day to finish analyzing the complex contribution alone. Please suggest a solution to get the result in half a day.
Also, can I take the last 10 ns of the trajectory and apply an interval of 5 for the analysis? Will that give a good result in a shorter time?
Relevant answer
Answer
Dear Sree Haryini,
Here are my suggestions:
1. Reduce the number of frames: As you suggested, using a smaller portion of the trajectory can significantly reduce computation time. However, make sure that the selected portion is representative of the system's behavior.
2. Increase the interval: Instead of analyzing every frame, you can increase the interval between frames. For example, you can analyze every 10th or 20th frame instead of every 5th frame. This will reduce the number of calculations needed.
3. Parallelize the computation: By using multiple cores, you can parallelize the computation to speed up the analysis; many MMPBSA tools support parallel computing. If you use the tool by Valdes, you can add mpirun -np X, where X is replaced with the number of processors, e.g. mpirun -np 16 (see the sketch after this list). But you also need to consider the RAM available.
4. Calculate MM/GBSA: As suggested by Martin Rosellen, calculating GBSA will indeed reduce the computational workload. I have tried this before: calculating GBSA took 2 hours, while PBSA took almost 1 day on the same trajectory.
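To illustrate points 1-3 concretely: keeping only the last 10 ns (frames 9001-10000) at an interval of 5 and running on 16 cores would look roughly like the sketch below. The keywords follow the MMPBSA.py-style input that gmx_MMPBSA uses; treat the file names and index-group numbers as placeholders for your own.
# mmpbsa.in (&general controls which frames are analyzed)
&general
  startframe=9001, endframe=10000, interval=5,
/
&gb
  igb=5,
/
# parallel run on 16 processes (gmx_MMPBSA's MPI mode)
mpirun -np 16 gmx_MMPBSA MPI -O -i mmpbsa.in -cs com.tpr -ct com_traj.xtc -ci index.ndx -cg 1 13 -cp topol.top
With 2000 frames at interval 5, only 400 frames are actually computed, and the &gb block replaces PB with the much cheaper GB model.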
  • asked a question related to Running
Question
4 answers
Hello:
I work in Abaqus 2017 a lot. I have noticed that when I submit any Abaqus job that does not contain a subroutine via parallel processing (i.e., cpus=4 or 8), it runs just fine.
But when I submit an Abaqus job that contains a user subroutine via parallel processing, it won't run at all. It will not abort, terminate, or do anything; the job just freezes.
The only way to run an Abaqus job that has a user subroutine is to run the job with single processing, which takes a lot of time.
I do use allocatable arrays for the job and define them in my subroutine. Is there something I should include in my subroutine?
Could you please tell me why Abaqus jobs with user subroutines get completely stuck with cpus set to 2 or higher?
Thank you very much!!!
Relevant answer
Answer
I know it's been a long time since you asked and you've probably solved the problem. But I'm experiencing a similar problem and have the same difficulty.
It seems that in your code you are using global arrays when referring to pointers, as in:
pt21 = SMAFloatArrayCreate(21,101,0.0)
Although they are global in the sense that all threads have access to them, they do not guarantee thread safety. From what I read in the documentation, you would need to use a mutex to lock the threads (losing performance).
Or simply define it as a local array (local in the sense that it belongs to one thread and needs no locking for access, and is therefore thread-safe). Something like this:
pt21 = SMALocalFloatArrayCreate(21,101,0.0)
and retrieve it later with:
... = SMALocalFloatArrayAccess(ID)
If you managed to solve the problem, please give me feedback on how you did it. I'm facing the same problem and I can't compile my subroutines on demand.
  • asked a question related to Running
Question
4 answers
I'm using covariance-based SEM software. Do I need to normalize the data before I run the SEM model? The SEM model is fine and the sample is large (N = 744).
1. Do I need to normalize the data before I run the SEM?
2. Would the central limit theorem not apply, so that I need not normalize the data?
3. Normalizing would change the basic characteristics of the data. Would the findings still be valid?
Relevant answer
Answer
In CFA and SEM, the default estimator in most software programs is maximum likelihood (ML). ML estimation is based on the assumption of multivariate normality.
Ignoring multivariate non-normality and using regular (uncorrected) ML estimation can lead to biased tests of model fit (e.g., chi-square), biased standard errors, and incorrect tests of statistical significance. The parameter estimates (e.g., factor loadings, regression and path coefficients) are relatively unaffected by non-normality.
If your data are non-normal, rather than applying data transformations, you can simply use robust ML estimation (e.g., Satorra-Bentler correction; Bollen-Stine bootstrap) to obtain corrected standard errors and test statistics. Alternatively, you can use an estimation method that does not require normality (e.g., weighted least squares or WLS estimation). However, WLS requires very large samples to provide valid results, so most researchers choose to use the first option (robust ML estimation).
You can also check out my Youtube videos on this topic here:
  • asked a question related to Running
Question
3 answers
After staining and solidifying my agarose gel, I load the first well of the dried agarose gel with the TrackIt Ladder (10488058) from Invitrogen and load my unstained DNA samples into the other wells. I fill the electrophoresis device without covering the top of the agarose gel and run the electrophoresis at 35 V (5 min) and 50 V (5 min). Then, I cover the gel with TBE to 7 mm above it and run the electrophoresis at 65 V (60 min).
When I take the photo with Trans UV BioRad, the ladder bands are distinct and stain well, but the DNA samples are not.
I need help with this. I have tried the same procedure with other Invitrogen ladders and without a ladder, and the DNA bands are visible; it is only with this TrackIt Invitrogen ladder that they are not.
Relevant answer
Answer
You do appear to have weakly visible bands in this image, so I would check that your sample loading dye is dense enough to hold the sample in the well. If so, then run 3 or 4 more cycles of PCR to get more product. You mention that the gel is not covered with running buffer; it is usual to cover the gel completely with running buffer to get good results. You could just load more sample and the gel would look better in this case, but more cycles would be better.
  • asked a question related to Running
Question
4 answers
Flash flooding in the UAE: could it have been avoided?
The risk of flash flooding in the UAE is very well known.
Changes in land use have arisen from enhanced urbanization over the last few decades. As a result, the hydrological consequences associated with flood risk need to be faced, particularly shorter times to peak discharge.
Still, under normal circumstances, 147 mm in a day is not at all a big spell to be concerned about. However, the UAE has an altogether peculiar hydrological system:
1. The UAE does not have rivers flowing downhill.
2. The UAE's flood terrain is shaped not by conventional water flow but by wind, and wind-built dunes are not stable under flowing water.
Flood water has to flow along the base of the dune formations, dictated by wind direction. And these dunes in the UAE vary over orders of magnitude (some running for a few tens of meters, others for tens of kilometers).
3. Highly variable infiltration rates across the entire catchment.
4. Establishing connectivity between overflowing upstream ponds and subsurface flow paths remains a basic problem.
5. Scouring and erosion dictate the variations in the course of surface water flows.
6. Absence of natural river mouths discharging floodwaters into the sea.
Relevant answer
Answer
Well, sure. It could have been avoided. Most are engineers in this thread (not me, full disclosure), so you know--we can build anything to manage any flows produced by any event you'll likely encounter (at least for some period of time). But, of course, the question is, should we? It's always a balance between stormwater management infrastructure lifespan versus the chance of seeing a large, rare event when you decide what to build and decide what to spend on such facilities. At this stage, as the climate continues to adjust to excessive GHGs, my advice is to buy the land one needs for future expansion of stormwater facilities, but build infrastructure now that is sized more closely to today's expected flows. It's possible all of this is happening in the UAE now, but I know nothing about their programs and have no time today to head down a google rabbit hole to learn them. Perhaps others can chime in on what goes on there. Cheers!
  • asked a question related to Running
Question
3 answers
I have used a DLS instrument (Zetasizer software) to measure the size of the prepared micelles. For the first 2 runs the quality report shows "Good", while the 3rd measurement shows "Too Polydisperse" for the same sample; the Z-avg values (26.48, 26.64, 27.93) and PDI values (0.558, 0.563, 0.502) are as stated. (The 3 runs are done automatically by the machine.)
Relevant answer
Answer
By examining run-to-run repeatability in this way, researchers can gain valuable insights into the reliability of their data and take the necessary steps to address any underlying issues.
  • asked a question related to Running
Question
3 answers
I am studying the impact of leadership style on job satisfaction. In the data collection instrument, there are 13 questions on leadership style, divided among a couple of leadership styles; on the other hand, there are only four questions for job satisfaction. How do I run correlational tests on these variables? What values do I select to analyze in Excel?
Relevant answer
Answer
First, you need to compute the correlation between your target variable and each of your potential independent variables, and check which independent variables are the most correlated with it (as mentioned earlier, a correlation coefficient closest to -1 or +1). Once you decide, according to these correlation coefficients, which variables to select for your model, you need to ensure that there is no multicollinearity in your model. To ensure that, run correlation tests again between each pair of independent variables; if two independent variables are too strongly correlated, you should introduce only one of them into your model (e.g., the one with the higher correlation with your dependent variable). A small worked sketch follows below.
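If you later move beyond Excel, the same mechanics take a few lines of Python/pandas once each construct is averaged into a single score per respondent (the file and column names below are hypothetical; in Excel the equivalent is AVERAGE per scale plus CORREL):
import pandas as pd

df = pd.read_excel("survey.xlsx")   # hypothetical file: one row per respondent

# average the items of each construct into one score (column names are hypothetical)
df["transformational"] = df[[f"lead_q{i}" for i in range(1, 8)]].mean(axis=1)
df["transactional"] = df[[f"lead_q{i}" for i in range(8, 14)]].mean(axis=1)
df["satisfaction"] = df[[f"js_q{i}" for i in range(1, 5)]].mean(axis=1)

# correlation of each leadership style score with job satisfaction
print(df[["transformational", "transactional", "satisfaction"]].corr())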
  • asked a question related to Running
Question
3 answers
I'm running MD calculations in VASP for a pi-stacked dimer (triphenylene), and I constrained the internal coordinates of the individual monomers using the ICONST file.
The calculation always stops after one ionic step with "Error: SHAKE algorithm did not converge! Error too large, I have to terminate this calculation!" Can anyone give me guidance on which parameters I can change to eliminate this error?
the error file is attached for your reference.
Thank you in advance!
Relevant answer
Answer
I have also met this problem, and I don't know what to do either.
  • asked a question related to Running
Question
3 answers
Hi,
When there is only a single analysis step in my job, a multi-core processor can successfully be used to run it.
However, when the job has more than one analysis step, multi-core processing can seldom be used successfully. Only the first step finishes; after that, the subsequent step does not proceed, and the status always shows "running".
Does anybody know the reason for this?
Thanks.
Relevant answer
Answer
@Pedro Weiss Mattioli
Hah, interesting. I am surprised that people can still find my question post even now. Thank you very much for sharing your suggestion. What you mentioned is correct: the capacity of your RAM or storage space can account for the error, and I have actually confirmed that. But even without these issues, this error can still happen sometimes. From what I have heard, similar problems also appear in other computational mechanics fields. Weird, huh? Anyway, considering it is a problem that does not appear too often, I am not troubled by it anymore.
Thank you.
  • asked a question related to Running
Question
5 answers
Hello all:
Any help appreciated. I am running an intervention with students in an ESL environment. I want to compare knowledge and awareness of the topics pre- and post-intervention. The issue is that the survey needs to be anonymous, and I have no guarantee that the same students will take part in both questionnaires (although there will be significant overlap). I will not be running a placebo group.
Am I looking at independent t tests, or something different? Can I infer that any change is due to the intervention if it's not a paired t test?
Again, any help appreciated, if anyone could direct me!
Relevant answer
Answer
Robert Matthews is it possible to ask for a personal code like:
1. First letter of your mother's first name
2. Last letter of your mother's birth name
3. Second letter of your father's first name
4. First letter of your birth place
5. Day of your birth
For example, RROG12.
This (almost) ensures that every participant will have a unique code, which can easily be reconstructed at several time points (see the sketch after this reply).
Joshua Michael Staley how did you take into account that Robert is probably not able to match the pairs and will therefore not be able to calculate difference scores?
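A toy Python sketch of assembling such a code (the names here are invented; the fields follow the list above):
def participant_code(mother_first, mother_birth_name, father_first, birth_place, birth_day):
    # first letter of mother's first name, last letter of mother's birth name,
    # second letter of father's first name, first letter of birth place, day of birth
    return (mother_first[0] + mother_birth_name[-1] + father_first[1]
            + birth_place[0] + f"{birth_day:02d}").upper()

print(participant_code("Rita", "Meyer", "Robert", "Graz", 12))   # -> RROG12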
  • asked a question related to Running
Question
2 answers
I am running an RNA-ligand simulation in GROMACS, and I encountered a problem in which the number of coordinates in the coordinate file (EM.gro, 2171) does not match the topology file (topol.top, 30886). Please help me with this problem.
Relevant answer
Answer
Thank you so much.
  • asked a question related to Running
Question
1 answer
Regarding the dftbephy code: how can we fix the problems when running python bands.py and python dftbephy-epc.py? The problems on my system are in bands.py line 92, dftbephy/calc.py lines 176 and 253, dftbephy/epc.py line 83, dftbephy-epc.py line 138, and python/site-packages/scipy/linalg/decomp.py line 578, where errors stop the run. How can I resolve the issue or fix the problems shown in the screenshot file below?
Relevant answer
Answer
1. The numpy.linalg.LinAlgError is an exception that occurs when you're performing linear algebra operations using the NumPy library and a mathematical condition prevents the correct execution of the operation. This error usually happens when you attempt to invert a singular matrix (a matrix that has a determinant of zero) or perform an operation that relies on matrix inversion, such as solving a system of linear equations or computing the inverse of a matrix.
2. Use another version of Python to run this code, or look for the Python version best suited to the packages used in your .py file.
- Also, don't forget to update via apt.
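A minimal illustration of point 1, showing a singular matrix raising numpy.linalg.LinAlgError and a common workaround:
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # second row is twice the first -> determinant is 0

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("LinAlgError:", err)    # "Singular matrix"

# common workarounds: a pseudo-inverse or a least-squares solve instead of inv()
print(np.linalg.pinv(A))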
  • asked a question related to Running
Question
2 answers
Can anyone assist with running the linkage analysis tool 'Easylinkage', or any alternative tool, for conducting linkage analysis and calculating LOD scores?
Relevant answer
Answer
Linkage analysis is a method used to map genetic loci associated with specific traits or diseases within families. It helps identify regions of the genome that may contain genetic variants contributing to the trait of interest. There are several software tools commonly used for linkage analysis, such as:
1. MERLIN (Multipoint Engine for Rapid Likelihood Inference):
2. Allegro:
3. GeneHunter:
4. SOLAR (Sequential Oligogenic Linkage Analysis Routines):
5. PLINK:
These are just a few examples of the linkage analysis tools available. The choice of tool depends on the specific requirements of your study, the type of genetic data you have, and the analysis methods you intend to use.
  • asked a question related to Running
Question
8 answers
Is the opening of Islamic banking by conventional banks a fact?
Relevant answer
Answer
With respect to ethical principles, conventional banking and Islamic banking are not a good match.
Source of the synopsis:
  • asked a question related to Running
Question
1 answer
Does anyone know what I have done wrong? The error message from a PROCESS v4.2 moderation analysis with a categorical moderator reads: "one of the categories contains only a single case."
The moderation variable is categorical with 4 values. I have run this analysis before and it worked perfectly; where have I made my mistake, please?
Relevant answer
Answer
Did you code the grouping variable wrongly? It says that one group has a sample size of N = 1. If it worked before, I would guess that you recoded something, maybe by accident. Check your raw data, for example: get the frequencies of every category in your grouping variable. Does that correspond to the sample sizes for each category that you gathered?
Otherwise show your data here on RG.
  • asked a question related to Running
Question
6 answers
With regard to the article by Kasarla & Pathak (2023) entitled "Tenets of Specimen Management in Diagnostic Microbiology": are there any significant effects of specimen management in microbiology on the overall results of laboratory tests? Why is it important to ensure the quality and state of the specimen before running it in the lab?
Relevant answer
Answer
Specimen management in microbiology plays a critical role in ensuring the accuracy and reliability of laboratory test results. One significant effect is contamination prevention: proper collection, handling, and transport of specimens help prevent contamination, which can lead to false-positive results or misinterpretation of test results; contaminated specimens can introduce foreign microorganisms into the culture, interfering with the growth and identification of target pathogens. Other aspects include the viability of microorganisms, preservation of microbial characteristics, optimal growth conditions, timeliness of testing, and quality control and assurance. In short, specimen management is a critical aspect of microbiology laboratory practice, and deviations from best practices can significantly impact the accuracy and reliability of test results.
  • asked a question related to Running
Question
4 answers
I am facing some problems while running a project in TRNSYS.
I originally created the project in TRNSYS 17, where it ran fine.
But when I try to run the same project in TRNSYS 18, I get the following warnings:
1) "A duplicate of TYPE508 was found in "TESS_HVAC_v17.2.01_64.dll""
2) "Wrong version of gentrn.dll was used. TYPE 56 expects version 255.000, but the gentrn.dll version 257.000 used by TRNBuild was loaded."
After I got this error, I thought the problem was with the version (the installed TRNSYS 17 is 32-bit and TRNSYS 18 is 64-bit), so I recreated the project in TRNSYS 18, but when I try to run it I get the same warning messages in the recreated project.
It would be great if i could know what the problem is and how to solve it.
Relevant answer
Answer
Can you share how you solved it?
  • asked a question related to Running
Question
1 answer
Hello, I am trying to find a website or freeware that predicts amphipathic helices from an AA sequence.
I have been trying AMPHIPASEEK (AMPHIPATHIC IN-PLANE MEMBRANE ANCHORS PREDICTION), but it does not seem to be running.
Also, any tips on submission (a small helix sequence vs. the whole protein, etc.) would be appreciated.
Thanks everybody, Neal
Relevant answer
Answer
For those still interested. A nice tool to (among other things) draw a helical wheel plot is Heliquest: https://heliquest.ipmc.cnrs.fr
This user-friendly tool also enables the so-called Heliquest-based Eisenberg plot approach to distinguish between amphipathic, transmembrane helices etc., see for more details:
  • asked a question related to Running
Question
2 answers
I have a trivariate panel VAR system with the following variables: LnGDP (natural log of GDP), fiscal expenditure (as % of GDP) and interest rate (in %). LnGDP and the interest rate are stationary in levels, but fiscal expenditure is not; its first difference is stationary. Note that when checking the stationarity of fiscal expenditure, the panel is stationary when I include a drift.
My first question: if I run a panel VAR model based on the above information, what are the potential consequences?
My second question: provided the above regression is non-spurious, how do we interpret the coefficients in the model?
Thank You
Relevant answer
Answer
Thank You sir for your reply. I will look into this!
  • asked a question related to Running
Question
4 answers
Hello everyone, how do I run DFT-D3 with Gaussian 09?
With Regards
Relevant answer
Answer
The link below can effectively help you:
See "Empirical Dispersion" in "More" tab.
  • asked a question related to Running
Question
4 answers
Every so often, someone presents work that is derivative of mine (e.g., Lightspeed Expanding Hyperspherical Universe or LEHU).
That particular hypothesis or postulate is impossible to defend. Richard Feynman proposed the hyperspherical topology in a lecture, but he couldn't defend it because of the problems it creates.
My theory - The Hypergeometrical Universe Theory (HU) solves those problems.
It emboldened many copycats who believed they could jump on the wagon, take part in HU's model (for example, LEHU), and run.
By the way, I should clarify that LEHU is part of HU's model, not because I was the first to propose it, but because I am the only one who can defend it.
That creates flawed models where the flaws are obvious.
Here, I present a critique of one of these models. Feel free to disagree or agree in the comments.
Relevant answer
Answer
Thank you for your kind comment. I am sure the fellow will move on undisturbed. Hopefully, the horrendous critique will hold other people back...:) They are a dime a dozen...:)
If you are at LinkedIn look me up. I will be presenting videos explaining the theory. I am just preparing the explanation of the Fine Structure Constant...:)
  • asked a question related to Running
Question
1 answer
Predictive Relevance in Smartpls
Relevant answer
Answer
Hello,
Could you please explain your request in more detail?
Regards,
Mahdi Qanavati
  • asked a question related to Running
Question
8 answers
Hello
I am new to CRISPR-Cas9 and, for my project, I started to collaborate with another lab that claims expertise in this technique. The objective is to produce 3 cell lines, each with a knockout of a different gene. They did the whole process from cloning the plasmid to viral transfection, where a point mutation was induced and an antibiotic resistance gene was added and then selected for.
From what I understand, all the cells they gave me are resistant to the antibiotic, but not all cells necessarily have the mutation, or the same mutation, and the solution for that, as far as I know, would be single-cell cloning. However, they said single-cell cloning would not be necessary and that all I had to do was run a western blot for the targets in this heterogeneous cell pool to be sure the knockouts worked. They said that if the band was weaker or of a different size, it would prove the knockout worked. They also said I could already run phenotypic experiments and these would show it worked as well, but, according to them, the definitive proof is the western blot.
Because I am completely new to this, I just wanted to know if such decisions make sense. I find it a bit odd because I would have to do single-cell cloning anyway in the end. It wouldn't make sense to me to check the phenotype of a heterogeneous cell pool if I don't even know the mutation rate of the cells, and I assume such rates might change every time I plate those cells. Moreover, I think there is the possibility that the protein could keep the same size even if the knockout made it dysfunctional.
I found those decisions weird, but I don't have enough experience with the technique to have a solid opinion. Any thoughts?
Relevant answer
Answer
But maybe isolate genomic DNA and try PCR with Sanger sequencing or T7EI assays rather than a western, since you will have a lot of wells (i.e., clones) to select from.
  • asked a question related to Running
Question
1 answer
Hello everyone,
I am trying to run a Rietveld refinement on my sample, which has two phases. The primary phase is quite dominant, with most of the peaks, but there is only one peak from the secondary phase. I read somewhere that Rietveld refinement is not possible, or will not yield good results, on a pattern with a small number of peaks. Can anyone please clarify whether I can run the refinement on a pattern with just one secondary-phase peak?
My scan is from 10 to 70 degrees with a step size of 0.0168.
Relevant answer
Answer
Hello, Vignesh.
A secondary phase with only one peak is quite problematic because we may not be able to identify it. But if you have a clue about it, we may try to find it.
Best regards
Ricardo Tadeu
  • asked a question related to Running
Question
1 answer
What is a super vacuum? Is the Earth in a vacuum? And what is dark energy?
None of this has been proven to date, and nature has many times in the past produced exceptions to and violations of accepted theories, showing them to be merely human formalisms and experimental artifacts exploiting the limits of technology; physical limits and laws are constantly being broken and bent in nature. Here we will attempt to show, theoretically and with experimental evidence, why and how vacuum space exists in our universe, either in its theoretically idealized absolute form (free space) or as the partial vacuum that characterizes the vacuum of QED or QCD, and why its zero-point energy and oscillations may actually be the greatest proof in nature for super energy.
It is possible, without violating causation, that the apparent "nothing" of vacuum space may be evidence for superluminosity, hidden all this time right in front of us. We are trying here to answer a fundamental question of physics: why the vacuum of space looks to us like nothing, on the assumption that "nothing" exists in nature, and why a hypothetical superluminous vibration of a particle the size of the Planck length creates apparent nothingness in our spacetime. The novelty of the research here is the inference that free space is dark energy, and that this energy is superluminous.
Stam Nicolis added a reply:
(1) Depends what is meant by "super vacuum". The words must first be defined before questions can be asked. As it stands, it doesn't mean anything.
(2) To a good approximation the earth is moving around the Sun in a vacuum, i.e. its motion can be described by Newtonian mechanics, where the only bodies are the Earth and the Sun and the force between them is Newton's force of gravitation.
(3) Dark energy is the property of space and time that describes the fact that the Universe isn't, simply, expanding, but that this expansion is accelerating. To detect its effects it's necessary to measure the motion of bodies outside our galaxy.
To understand all this, it's necessary to study classical mechanics, which leads to understanding the answer to the second question, and general relativity, in order to understand the answer to the third.
László Attila Horváth added a reply:
Dear Abbas Kashani,
The graviton, which creates or captures elementary X-rays and gamma rays, can by itself be considered almost like a super vacuum.
Sergey Shevchenko added a reply:
What the rather numerous, and really strange, "vacuums" in mainstream physics are, and what the two real vacuums are, is explained in the Shevchenko-Tokarevsky Planck-scale informational physical model; the 3 main papers are
The first vacuum is the Matter’s fundamentally absolute, fundamentally flat, fundamentally continuous, and fundamentally “Cartesian”, (at least) [4+4+1]4D spacetime with metrics (at least) (cτ,X,Y,Z, g,w,e,s,ct), which is the actualization of the Logos set elements “Space” and “Time” [what are “Logos” set, “Space” and “Time” see first pages in 1-st or 2-nd links] at creation and existence of a concrete informational system “Matter”,
- i.e. this vacuum is a logical possibility for/of Matter’s existence and evolving, and so is by definition nothing else than some fundamentally “empty container” , i.e. is “real/absolute” vacuum.
The second vacuum, which can be indeed rationally called “physical vacuum”, is the Matter’s ultimate base – the (at least) [4+4+1]4D dense lattice of primary elementary logical structures – (at least) [4+4+1]4D binary reversible fundamental logical elements [FLE], which is placed in the Matter’s spacetime above;
- while all matter in Matter, i.e. all particles, fields, stars, galaxies, etc., are only disturbances in the lattice, that were/are created at impacts on some the lattice’s FLE. At that it looks as rather rational scientifically to assume, that such vacuum really existed – that was the initial version of the lattice that was created/formed at the “inflation epoch”, more see the SS&VT initial cosmological model in section “Cosmology” in 2-nd link.
After this initial lattice version was created, in the lattice a huge portion of energy was pumped uniformly globally [and non-uniformly locally], what resulted in Matter’s “matter” creation, which we observe now.
Since all disturbances always and constantly move in the lattice with 4D speeds of light, now can be only some “local physical vacuums”, etc.;
- though that is really quite inessential – the notion "physical vacuum" is completely useless and even wrong, since the really scientifically defined FLE lattice is completely enough for the description and analysis of everything that exists and happens in Matter. The "vacuums" introduced in mainstream physics are really nothing else than transcendent/mystic/fantastic mental constructions, which exist in mainstream physics because in the mainstream all fundamental phenomena/notions, including "Matter", "Space/space", "Time/time", are fundamentally transcendent/uncertain/irrational,
- while these, and not only, really fundamental phenomena/notions can be, and are, really rigorously scientifically defined only in framework of the SS&VT philosophical 2007 “The Information as Absolute” conception, recent version of the basic paper see
- on which the SS&VT physical model is based.
More see the links above, a couple of SS posts in
Abderrahman el Boukili added a reply:
Super vacuum, in my view, is just the vacuum itself, that is, the channel through which the universe of particles and anti-particles intersects.
Courtney Seligman added a reply:
For all practical purposes, the Earth is moving through a vacuum as it orbits the Sun, as there is so little of anything in any given place that only the most sensitive instruments could tell that there was anything there. But there are microscopic pieces of stuff that used to be inside asteroids or comets, and pieces of atoms blown out of the Sun as the Solar Wind, and cosmic rays that manage to get through the Sun's "heliosphere" and run into anything that happens to be in their way. So though the essentially empty space around the Earth would qualify as a vacuum by any historical standard, it isn't an absolutely perfect vacuum. And I suppose a "super vacuum" would be a region where there isn't anything at all, including not only matter, but also any form of energy (which has a mass equivalence of sorts, per Einstein's Special Theory of Relativity); and if so, then "super vacuums" do not exist.
Harri Shore added a reply:
The concepts you're exploring—super vacuum, dark energy, and the nature of the vacuum in quantum electrodynamics (QED) and quantum chromodynamics (QCD)—touch on some of the most profound and speculative areas in modern physics. Let's break down these concepts to provide clarity and context for your inquiry.
Super Vacuum
The term "super vacuum" is not widely used in mainstream physics literature but could be interpreted to mean an idealized vacuum state that is more "empty" than what is typically considered achievable, even beyond the vacuum state described by quantum field theories. In standard quantum field theories, a vacuum is not truly empty but seethes with virtual particles and fluctuates due to quantum uncertainties, known as zero-point energy.
Is the Earth in a Vacuum?
The Earth is not in a vacuum but is surrounded by its atmosphere, a thin layer of gases that envelops the planet. However, outer space, which begins just beyond the Earth's atmosphere, is often described as a vacuum. This is because outer space contains far fewer particles than the Earth's atmosphere, making it a near-vacuum by comparison. It's important to note that even the vacuum of outer space is not completely empty but contains low densities of particles, electromagnetic fields, and cosmic radiation.
Dark Energy
Dark energy is a hypothetical form of energy that permeates all of space and tends to accelerate the expansion of the universe. It is one of the greatest mysteries in modern cosmology, making up about 68% of the universe's total energy content according to current observations. The exact nature of dark energy is still unknown, but it is thought to be responsible for the observed acceleration in the expansion rate of the universe since its discovery in the late 1990s through observations of distant supernovae.
Vacuum Energy and Superluminosity
Vacuum energy refers to the energy that exists in space due to fluctuations of the quantum fields, even in the absence of any particles or radiation. It is a manifestation of the Heisenberg uncertainty principle in quantum mechanics, which allows for the temporary creation of particle-antiparticle pairs from "nothing."
The concept of superluminosity or superluminal phenomena (faster-than-light phenomena) is speculative and not supported by current mainstream physics, as it would violate the principle of causality, a cornerstone of the theory of relativity. However, there have been theoretical explorations of conditions under which apparent superluminal effects could occur without violating causality, such as in the context of quantum tunneling or warp drives in general relativity.
Vacuum Space as Evidence of Superluminous Energy
Your hypothesis suggests that vacuum space or "nothingness" might be evidence of a superluminous energy or vibration at the Planck scale that creates the apparent emptiness of space. This is a speculative notion that would require new theoretical frameworks beyond the standard model of particle physics and general relativity. It also implies that dark energy, the force behind the universe's accelerated expansion, could be related to this superluminous vacuum energy.
While current physical theories and experimental evidence do not support the existence of superluminous phenomena or energies, the history of science shows that our understanding of the universe is constantly evolving. Theoretical proposals that challenge existing paradigms are valuable for pushing the boundaries of our knowledge and prompting new avenues of experimental and theoretical investigation. However, any new theory that proposes mechanisms beyond established physics must be rigorously tested and validated against empirical evidence.
Relevant answer
Answer
1. A vacuum is a region of space with no matter; a super vacuum could be defined in one of two ways, depending on whether it is a concept or a description of current technology. In the first instance, it would be a region of space with neither matter nor energy (in which case, unless it is an extremely small region, it does not exist, because any part of space big enough to see without a microscope would at least have light of some sort passing through it, e.g., at least the Cosmic Background Radiation). In the second instance, it could be used to describe a "laboratory" vacuum which has far less matter in it than any previously created laboratory vacuum.
2. The Earth is in a region that is essentially a vacuum, because most of the space between the planets has practically nothing in it at any given time. However, there are cosmic rays and the Solar Wind everywhere, so though merely pieces of atoms, there is some stuff everywhere in space; but the amount is so small that for all "practical" purposes, it is a vacuum.
3. Dark energy is a fiction created by cosmologists to explain why the geometry of the Observable Universe is "flat" despite the Universe having too little mass for its gravity to fight the tendency of empty space to expand (per Einstein's General Theory of Gravity). Flatness would require something to add up to 100% of the "critical mass" of the Universe, and since visible and unobservable ("dark") matter makes up at most 27% of the critical mass, cosmologists created the concept of dark energy to make up the remaining 73%.
However, there is no need to presume that the Universe is flat. Just as the Earth is a globe but looks essentially flat (on average, and particularly at sea) because you can't see enough of it to see its real shape, the Universe is actually what is called "hyperbolic" in shape, which is exactly what you would expect if its mass is less than the "critical" mass. Almost all cosmologists are convinced by various characteristics of the Observable Universe that the "real" Universe is at least thousands, and perhaps 10 to the 1000's, of times bigger than what we can see, so what we can see is too small to reveal its real shape; it just looks "flat".
Since by definition we can't see anything but the "Observable" Universe, we will never be able to see the true shape of the Universe; so "dark energy" will remain a "useful" fiction for calculation purposes for the foreseeable (if not infinite) future. But I am certain that we will never figure out what it is, because it doesn't exist. (Having been both a mathematician and a professional astronomer, I can assure you that even when something like "dark energy" doesn't exist in real life, creating a mathematical model that includes it, in order to make the math work right, is considered perfectly OK by professional mathematicians.)
  • asked a question related to Running
Question
1 answer
It is my first time dealing with label-free proteomics data. The data were generated from mayfly, which doesn't have annotated proteins. I used "uniprot_sprot.fasta" as a reference sequence, which has about 500k protein entries. The goal is to identify the proteins and do differential analysis downstream. I used the standard LFQ settings in MaxQuant and found only 85 entries in the proteinGroups.txt file, which is too few for the whole mayfly proteome. When I ran with a smaller set of proteins we have generated from mayfly (which are partial proteins) as the reference sequence, I got 330 entries in proteinGroups.txt. I expected a higher number of identified protein groups when using the Swiss-Prot reference sequence. Any suggestion what might have gone wrong?
Relevant answer
Answer
I think the problem is mainly due to 'uniprot_sprot.fasta' having too few sequences that overlap with mayfly (currently, there are only 3 proteins for Rhithrogena germanica and around 500 for Rhithrogena spp. in UniProt). Therefore, only a few peptides are identified, and eventually the FDR filtering kicks in early, cutting the list of identified peptides short. I would obtain the full protein sequences of mayfly before conducting proteomics.
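If a taxon-restricted database is needed in the meantime, one crude option is to subset the Swiss-Prot FASTA by a header keyword. A minimal sketch assuming Biopython is installed (the keyword and file names are illustrative; UniProt's own taxonomy tools are more rigorous):
```
from Bio import SeqIO

keyword = "Rhithrogena"  # illustrative taxon string to look for in FASTA headers
records = (r for r in SeqIO.parse("uniprot_sprot.fasta", "fasta")
           if keyword.lower() in r.description.lower())
count = SeqIO.write(records, "mayfly_subset.fasta", "fasta")
print(f"wrote {count} matching sequences")
```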
  • asked a question related to Running
Question
3 answers
I am running a bacterial culture and reading the bacterial density on a spectrophotometer at 600 nm. I blanked with the culture medium and also ran a tube without inoculation as a control.
After some time, the inoculated samples give me 0.341 but the control (medium alone, same as the blank, but kept in the thermostat for the same period as the bacterial culture) gives me 0.022.
What is the threshold in OD600 units to say that the control is not contaminated?
Thank you
Relevant answer
Answer
That's the point: the control should have the same OD as the blank (0). How much is 0 in spectrophotometry? Does 0.001 count as contaminated? 0.010? 0.100? The purpose of including a non-inoculated control was to check whether the medium was contaminated.
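There is no universal OD600 cutoff, but one pragmatic rule (an assumption for illustration, not a published standard) is to flag the control when it exceeds the blank by more than three standard deviations of replicate blank readings:
```
import statistics

blank_reps = [0.000, 0.002, -0.001, 0.001]  # illustrative replicate blank readings
control_od = 0.022

mu = statistics.mean(blank_reps)
sd = statistics.stdev(blank_reps)
threshold = mu + 3 * sd                      # simple 3-sigma decision rule (assumed)
print("suspect" if control_od > threshold else "clean", f"(threshold = {threshold:.3f})")
```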
  • asked a question related to Running
Question
1 answer
Dear all
I am trying to run the RFdiffusion tool locally. While running the script inference.py I am getting the error "NVTX functions not installed. Are you sure you have CUDA build?"
I have the latest version of CUDA with an NVIDIA GPU. While installing the software I did the conda SE3-Transformer installation.
Any suggestion regarding running this software locally will be highly appreciated. Thanks in advance!
Relevant answer
Answer
Now I can run it successfully. I am writing down the answer in case someone faces the same error in the future: I updated the SE3nv.yml environment and reinstalled the PyTorch version. Now the setup runs for every example without any issue.
  • asked a question related to Running
Question
1 answer
I have just started using Bader Charge.
As per the instructions, I first unpacked all the required files (for instance, Bader binary files, source code files, and chgsum.pl script). I ran electronic SCF calculation by adding these tags to my INCAR file:
LAECHG = .TRUE.
NSW=0
LWAVE = .FALSE.
LCHARG = .TRUE.
Afterwards, it generated the AECCAR0, AECCAR1, and AECCAR2 files. My system has 192 atoms. Then, when using the chgsum.pl script to generate the CHGCAR_sum file, I received this message:
``` Atoms in file1: 0, Atoms in file2: 0
Points in file1: 7.1966367651e-06, Points in file2: 7.1966367651e-06 ```
This doesn't make sense to me, since I checked all my files, including the CHGCAR files: they have the correct dimensions and are complete. Then I tried to run Bader:
``` ./bader CHGCAR -ref CHGCAR_sum ```
I received this message:
``` GRID BASED BADER ANALYSIS (Version 1.05 08/19/23)
OPEN ... CHGCAR
VASP5-STYLE INPUT FILE
DENSITY-GRID: 160 x 160 x 160
CLOSE ... CHGCAR
RUN TIME: 0.87 SECONDS
OPEN ... CHGCAR_sum
VASP5-STYLE INPUT FILE
forrtl: severe (24): end-of-file during read, unit 100, file
/home/tyadav/54209/CHGCAR_sum
Image PC Routine Line Source
bader 000000000048A056 Unknown Unknown Unknown
bader 000000000040EC7F Unknown Unknown Unknown
bader 0000000000412CA6 Unknown Unknown Unknown
bader 0000000000416CA9 Unknown Unknown Unknown
bader 0000000000401F08 Unknown Unknown Unknown
bader 000000000040187D Unknown Unknown Unknown
bader 0000000000514D41 Unknown Unknown Unknown
bader 000000000040175E Unknown Unknown Unknown ```
Can someone possibly help me with this issue? I really appreciate any possible suggestions.
Relevant answer
Answer
Problem has been resolved!
  • asked a question related to Running
Question
1 answer
Dear colleagues,
I've been running Blast2GO for approximately two weeks when suddenly my computer froze, requiring a restart. Unfortunately, after restarting the PC, I was unable to recover the data from Blast2GO. I'm seeking advice on how to recover the lost data, or if recovery isn't possible, how to prevent this from happening again in the future.
Thank you for your assistance.
Relevant answer
Answer
  1. Check if Blast2GO has an autosave feature. Many software programs automatically save progress periodically, which might contain your lost data.
  2. Look for temporary files or backups on your computer. Sometimes, software creates temporary files or backups that may contain the lost data.
  3. Contact Blast2GO support for assistance. They may have specific procedures or tools to help recover lost data or provide guidance on how to proceed.
  • asked a question related to Running
Question
2 answers
Hi,
I am a graduate student using a Shimadzu HPLC with a PDA detector.
This instrument was newly installed. Only the tech at installation and myself have run samples.
I am using it for research on cannabinoids specifically CBD and THC and only running diluted standards right now to modify the method.
I am using water (0.1% formic acid) as mobile phase A, and acetonitrile (0.1% formic acid) as mobile phase B. The total run time is 10 min. I have a gradient that starts at 70% B for 3 min, ramps up to 90% B over 2 min, holds for 1 min, then goes back down to 70% B for the rest of the run. The flow rate is 0.2 mL/min. My injection volume is 1 uL.
The column is a Shimadzu C18-120, 3 um 3.0 x 50 mm.
The PDA detector wavelength is from 190 nm to 600 nm and specific wavelengths are 210 nm and 220 nm.
During my first run I ran a CBD sample (CBD 50 ng/mL in acetonitrile). A mistake was made of not running a blank before the CBD sample :(
I ran several blanks after and still continue to see peaks.
I have remade the blanks many times in different vials, new solutions of blank.
I set up a run consisting of 75 blanks on a 2 minute "cleanout method" run where I was using 90% ACN for the full time. All the blanks in this batch had the same peak with a consistent intensity. (The intensity of the peak did not really decrease over the 75 injections).
I have also run Null injections and have gotten the same peak in those injections as well.
I switched mobile phases from ACN to methanol. When running the blanks with the methanol I still got the same peak. After reversing the column, running methanol through, then fixing the column I still got the same peak again.
First we had thought it was CBD that contaminated the instrument, but after switching the phases and running so many blanks we are not sure if this is something with the instrument that we should try to change/clean?
At this point I am not sure where this peak could be coming from and was looking for some advice/direction on what to try next!
I can provide additional information as needed!
Relevant answer
Answer
What is the retention time for the peak in acetonitrile? What is the retention time in methanol?
"Ghost" peaks come from contaminants. Null runs without injections showing the peak tends to rule out the injector as a source of the peak. The peak seems very reproducible.
One source of ghost peaks is contaminants in solvents. It could be the formic acid or the water; you still saw the peak when you tried methanol, which suggests the acetonitrile might not be the source.
If you post a chromatogram, it would be helpful.
  • asked a question related to Running
Question
3 answers
Hi, I am running blind docking and it's my first time using AutoDock. I have been following a tutorial step by step, but I have now come across this issue and I cannot understand why it keeps recurring. If anyone could help me figure this out, it would be very much appreciated!
Thank you in advance.
Relevant answer
Answer
"Cannot find gpf or grid parameter file" indicates that the path to your grid parameter file is incorrect, i.e., you saved your .gpf file in a directory other than your working directory.
  • asked a question related to Running
Question
2 answers
He
Relevant answer
Answer
Otherwise, if your solute is really precious and you don't want to lose any product at all, take it out just before it's dry, put it in a tray or other wide, flat container that will be easy to collect solid off of, and finish the drying in a vacuum oven. https://labovens.net/collections/vacuum-ovens
  • asked a question related to Running
Question
3 answers
I have a problem with running my logistic regression. When I run my analysis, I get really strange values and I cannot find anywhere how I can fix it. I already changed my reference category and that led to less strange values but they are still there. Also, this only happens to two of my eight predictors. These two predictors have multiple levels/categories.
Can someone explain to me what's wrong and how I can fix it?
Relevant answer
Answer
You are using categorical independent variables. How could there be a logistic relationship???
  • asked a question related to Running
Question
10 answers
We have recently acquired plasmids with the following configuration:
EF1A>{gene of interest}:P2A:Bsd
When used for transfection (293FT cells and SH-SY5Y cells), these cells expressed the desired protein after 48 hrs (verified several times by western blot and viewing of GFP which is encoded in some of our plasmids). When the selection antibiotic is added, most cells survive, which is expected. However, after a few days, the cells no longer produce the desired protein (verified many times by western blot and viewing GFP).
To be sure, we always use a negative control for the antibiotics, cells which were not transfected, and they all died quickly (36 hrs at most).
Oddly enough, when used for lentiviral infection, there is no issue, and the cells continue expressing the protein even after a few weeks of antibiotic selection.
We have not run into this problem with other vectors acquired from other sources.
Thanks in advance
Relevant answer
Answer
That's true. We've been pondering it, it's good to hear someone from the outside recommending it.
Thanks
  • asked a question related to Running
Question
1 answer
The landing (approach) process of the AFM (NT-MDT) we use does not work. Although the computer program appears to be running, the tip does not approach the sample, and we cannot observe any mechanical movement. We would be more than happy for any help with this.
Relevant answer
Answer
Is the screw on the z axis rotating during the approach? Is the sample sufficiently flat to approach the surface with no macroscopic asperities which may interrupt or block the piezo movement?
  • asked a question related to Running
Question
1 answer
Kindly guide me about changing the cut-off energy, k-points, or any other values. Please answer me; I will be thankful to you.
Relevant answer
Answer
The cut-off energy depends on the pseudopotentials, so if you're using the same pseudopotentials you should use the same cut-off energy.
The k-points sample the reciprocal-space of your simulation cell. If you make a supercell, the real-space cell is increased in size by some multiple, let's call it X. The corresponding reciprocal-space cell has been *reduced* in size by the same factor X, and since you don't have as much reciprocal space to sample you don't need as many k-points; in fact, if you used to have N_k total k-points you now need only N_k/X (although that may not be an integer, so you have to think carefully about exactly what the appropriate sampling would be).
Hope that helps,
Phil Hasnip
(CASTEP developer)
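To make the scaling argument above concrete, here is a minimal sketch (the function and numbers are illustrative, not part of any DFT code):
```
import math

def supercell_kgrid(kgrid, multiples):
    """Reduce a k-point grid when the real-space cell is enlarged.

    If the cell is enlarged by (nx, ny, nz), the reciprocal cell shrinks by
    the same factors, so roughly kgrid/n points suffice along each axis
    (rounded up, since the count must be a positive integer).
    """
    return tuple(max(1, math.ceil(k / m)) for k, m in zip(kgrid, multiples))

print(supercell_kgrid((8, 8, 8), (2, 2, 1)))  # -> (4, 4, 8)
```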
  • asked a question related to Running
Question
2 answers
I have an issue with running the "Rescale a Grid Solver Geometry File" example from the DAMASK official website. It fails at 'scaled = grid.scale(cells)'; see Figure 1 below.
Error: output shape not correct
Relevant answer
Answer
It seems that you're using an older version of DAMASK. I recommend updating to the most recent version (3.0.0-beta).
  • asked a question related to Running
Question
6 answers
My background is primarily qualitative so I'm struggling to understand the best way to approach analysing my research. I'm conducting a survey with multiple IVs that will be split into four blocks, but also have multiple DVs (use of various related services) and plan to use Hierarchical Multiple Regression. The first analysis will just be use/non-use of any of the services so HMR would be fine. But how do I compare use of different services? Would I have to run individual HMR analysis for each DV or is there a better way to do this?
Relevant answer
Answer
It sounds like you could have some binary DVs (use versus nonuse) which would require logistic regression.
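Extending that point, each binary service-use DV would get its own logistic model, with predictor blocks entered cumulatively to mimic the hierarchical approach. A minimal sketch (statsmodels assumed; the file and column names are hypothetical):
```
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("survey.csv")        # hypothetical data file
block1 = "age + gender"               # illustrative block-1 predictors
block2 = block1 + " + attitude_score" # block 2 adds further predictors

for dv in ["service_a_use", "service_b_use"]:  # binary 0/1 outcomes
    m1 = smf.logit(f"{dv} ~ {block1}", data=df).fit(disp=0)
    m2 = smf.logit(f"{dv} ~ {block2}", data=df).fit(disp=0)
    lr = 2 * (m2.llf - m1.llf)                 # likelihood-ratio test of block 2
    ddf = m2.df_model - m1.df_model
    print(dv, "LR =", round(lr, 2), "p =", stats.chi2.sf(lr, ddf))
```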
  • asked a question related to Running
Question
2 answers
I am studying the influence of 7 variables on a DV. Seven questionnaires have been adapted, and one for the DV has to be established. So is it okay for me to run a CFA on all the questionnaires? Or only an EFA on the newly established scale and a CFA for the rest? I've read that EFA is only for newly established scales. Or do I proceed with actual data collection and then run a CFA on all the scales?
Relevant answer
Answer
If you already have hypotheses about the number of factors and the loading pattern (which variables measure which factors), EFA may not be needed and you can proceed straight to CFA.
  • asked a question related to Running
Question
2 answers
Hi all,
I want to run isolated molecules as one unit in GROMACS. Any idea how I would do that?
Relevant answer
Answer
Ayaz Anwar, make a single PDB file for every combination and try using the protein-only option.
  • asked a question related to Running
Question
2 answers
I have a protocol that requires me to use a centrifugation speed of 44,000 x g for 70 mins to generate cytoplasts from a human cell. However, I only have access to a ultracentrifuge rotor that goes up 42,200 x g.
Would it be possible for me to just run this ultracentrifuge at 42,200 x g for 70 mins or slightly longer to generate my cytoplasts?
Relevant answer
Answer
I'm always reluctant to run a rotor at its maximal rated speed, especially if I don't know how well it has been treated in the past or how old it is. Chances are the stated time and g-force in the protocol are more than sufficient, so spinning at 40,000 g for 80 minutes will probably achieve the desired result.
Of course, if different rotor and tube geometries are used, results will vary because of differences in k-factor, as Engelbert Buxbaum mentioned. People usually don't specify all the details of the centrifugation, such as the rotor used or the type of tube, so it may not be possible to compare k-factors.
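As a back-of-the-envelope check, a common compensation rule is to scale time inversely with relative centrifugal force (an approximation that ignores the k-factor differences mentioned above):
```
g1, t1 = 44_000, 70   # protocol: RCF (x g) and time (min)
g2 = 42_200           # maximum RCF of the available rotor

t2 = t1 * g1 / g2     # keep the (g x time) product constant
print(f"spin for about {t2:.0f} min at {g2} x g")  # ~73 min
```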
  • asked a question related to Running
Question
4 answers
Hello,
I am running a UMAT and need to save about 300 SDVs as history output to plot them, but in the odb file I only get SDV1 to SDV100. I do not get any warning or error messages. I even tried to name all 300 SDVs individually in the inp file, but that didn't work either. Has anyone faced the same problem or does anyone know how to address it?
Thanks
Relevant answer
Answer
Hi Babak
Can you tell me how you created "history variables" in UMAT? I assign my history value to a "statev" in UMAT, but ABAQUS outputs it as a "Field Output" and not a "History Output".
Thanks
  • asked a question related to Running
Question
3 answers
Hi everyone. I am very new to computational protein work, so I have no idea how to install AlphaFold 2 to run on my computer.
I heard that if you install AlphaFold plus all required dependencies from source code, it is significantly faster to run than using a (pre-built) container.
I don't really have any idea on how those two works. If anyone could help me with this, I would be really grateful.
Does any of you know how to do this?
Relevant answer
Answer
You don't need to install on your computer. The easiest option would be ColabFold.
  • asked a question related to Running
Question
1 answer
hello,
I would like to add certain faces to a group. The process will run automatically, so I am writing a script.
But it doesn't work, because SpaceClaim names the faces randomly.
For example, this function adds the wanted faces to a group: groupx1.append(GetRootPart().Bodies[j + 2].Faces[3]).
However, it does not always work. See Appendix.
Can anyone help me?
Relevant answer
Answer
Do you have a good solution yet? I'm having the same problem.
  • asked a question related to Running
Question
3 answers
details:
MIPS USB Cameras provide a quick and easy means of displaying and capturing high-quality video and images on any USB 2.0-equipped desktop or laptop computer running a supported Microsoft® OS.
please send me.
thanks
Karthick
Relevant answer
Answer
Please download and use it.
  • asked a question related to Running
Question
2 answers
I want to run a meta-analysis for linkage mapping and GWAS QTLs and I will require a software to be able to achieve this.
Relevant answer
Answer
Thanks so much, AbdallFatah
  • asked a question related to Running
Question
16 answers
I am attempting to perform an EMSA with a transcription factor and its wildtype binding sequence but the first attempt showed that the protein never left the well. After some research, I have discovered that the theoretical pI of the protein is approximately 8.8 and my running buffer is 8.3.
What is the best way to run an EMSA for this protein? I am worried that if I change the loading buffer pH that the protein:DNA binding might be affected. Can I just adjust the pH of the running buffer to 1 point above the protein's pI (e.g. 9.8) with NaOH? Do I need to adjust the pH of the gel as well?
Relevant answer
Answer
Thanks Freyda Lenihan-Geels Raja Singh. I ran the complex in TBE buffer at pH 7.8, with gels also made at pH 7.8, and I reversed the leads. As a result, the complex has moved, but the free DNA is difficult (the bands are not crisp yet, but this gave me a sense that the complex is shifting).
  • asked a question related to Running
Question
1 answer
I'm running RNA extracted from Saccharomyces cerevisiae in a Tapestation 4200. The RINe value is excellent, but the sizes of the 18S/28S bands are lower than expected: ~1000 and ~1800 instead of ~2000 and ~4000, respectively. The internal control (lower marker, 25nt) in each sample ran as expected. Yeast don't seem to have a hidden break in the rRNA.
Has anyone experienced a similar problem?
Relevant answer
Answer
Hi Shon, you're right. The 28S/18S ratio is great, but the positions seem to be off. Are you sure the ladder you used was not for DNA, by any chance?
Plus, may I ask what your purification method was? The product is really clean.
  • asked a question related to Running
Question
2 answers
Hello all,
I have a question about MD which I would appreciate your answering: I want to run MD without solvation. Please suggest any ideas.
Relevant answer
Yes, it is possible. That kind of simulation is called an "implicit solvent molecular dynamics simulation". It is notably much faster than an explicit-solvent one, albeit at the expense of reliability, since the simulation doesn't reflect the real biological environment, as previously mentioned by Arnab Mukherjee .
To perform this, ordinary MD simulation software like GROMACS can be used and configured accordingly. Alternatively, commercial software such as MOE can also be employed for this purpose.
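For illustration, an implicit-solvent run is also straightforward in OpenMM (a swapped-in example, not one of the packages named above; file names are placeholders, and the OBC force-field pairing is just one common choice):
```
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import (PDBFile, ForceField, NoCutoff, HBonds,
                        Simulation, DCDReporter)

pdb = PDBFile("protein.pdb")                             # placeholder input structure
ff = ForceField("amber99sbildn.xml", "amber99_obc.xml")  # protein FF + OBC implicit solvent
system = ff.createSystem(pdb.topology, nonbondedMethod=NoCutoff, constraints=HBonds)

integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1 / unit.picosecond,
                                      0.002 * unit.picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()
sim.reporters.append(DCDReporter("traj.dcd", 1000))
sim.step(500_000)  # 1 ns at a 2 fs timestep
```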
  • asked a question related to Running
Question
1 answer
I've been learning ultracapacitors recently and am particularly curious about the short-circuit behavior of these cells. I'm aware that it's not a recommended procedure, but my intention is to conduct it at very low voltages, for instance, at 0.1V. Due to current lab limitations, I'm unable to run experiments at the moment, but the specific cell I'm studying is a Maxwell 325F capacitor, featuring approximately 1.8mOhm ESR for a fresh cell.
I have a few questions:
  1. Is it considered safe to perform a short-circuit test at 0.1V?
  2. How long would it take for the cell to stabilize around 0V or <1mV when the circuit is shorted?
  3. What can be expected in terms of the cell surface temperature during this test?
  4. When the short-circuit wire is removed, will the cell voltage jump back?
I would greatly appreciate any feedback or insights you can provide.
Relevant answer
Answer
Dongliang,
1: If you're not put off by kiloamp currents, I can't see a reason to be troubled by short-circuiting that component. Note the peak tolerable current - much higher and I imagine that damage to the plates or terminals could occur.
2: It depends on how much charge is in it. You know the capacitance, so you know the total charge at a given voltage. If you allow current to flow, you can write down an expression for the charge (and thus voltage) remaining.
3: The surface of what? The capacitor is a composite device - and the plates won't be thermally well-coupled to the outer skin of the device. The terminals are better connected (!) to the plates, thermally.
4: It depends on the chemistry of the electrolyte.
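To put numbers on point 2, treating the cell as an ideal capacitor discharging through its own ESR gives a simple exponential estimate (this ignores charge redistribution and contact resistance, so a real cell will rebound somewhat, per point 4):
```
import math

C = 325        # farads (Maxwell 325F cell)
R = 1.8e-3     # ESR in ohms (fresh cell)
V0 = 0.1       # starting voltage in volts

tau = R * C                        # time constant, ~0.59 s
i_peak = V0 / R                    # initial current, ~56 A at 0.1 V
t_1mV = tau * math.log(V0 / 1e-3)  # time to decay to 1 mV
print(f"tau = {tau:.2f} s, peak current = {i_peak:.0f} A, "
      f"time to 1 mV = {t_1mV:.1f} s")  # ~2.7 s
```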
  • asked a question related to Running
Question
6 answers
Dear All,
Could you tell me how to run an MD simulation with any software? I have DS and VMD at hand. I tried to run 30 ns in DS and it took 6 days; is that right? The reviewer asked me to extend this procedure to 100 ns, but my computer cannot complete the process. Could you let me know how to set up all the parameters and settings to finish it? Many thanks for your help, from the bottom of my heart.
Relevant answer
Answer
I usually use the YASARA Structure application to carry out molecular dynamics simulations up to 100 ns, which takes at most 6 days. Additionally, I use a PC with 16 cores.
  • asked a question related to Running
Question
2 answers
I am doing LC-MS (LC-QToF) of a phosphonate compound (nitrilotris methylene phosphonic acid), for which I add the derivatization agent trimethylsilyl diazomethane to each sample and wait 2 hours before running them on the instrument. I do not have any mass-labeled internal standard for the compound, so I am using caffeine as the internal standard. I am using a C18 column. In the beginning I was getting disturbed peaks with a tail from the start of the runs; then for two weeks I got clean peaks. However, I have started getting the same type of disturbed peaks again. What could cause the disturbed peaks in LC-MS? I have attached a screenshot of the disturbed peak I am getting. My internal standard peak also looks like this.
Relevant answer
Answer
Hello, a bit more information would be useful to help diagnose the problem, but just looking at the fronting peak shape, my guess would be a sample solvent with an eluotropic strength that is too high, combined with an injection volume that is also too high. Try injecting a lower volume or matching the injection solvent to your starting gradient conditions.
  • asked a question related to Running
Question
1 answer
I am trying to run a coupled flow-deformation analysis for a slope subjected to rainfall infiltration with multiple durations (i.e., 1 h, 3 h, 6 h, 12 h, 24 h, and 36 h).
For the initial steps, I set the maximum number of steps to 10000 to complete the analysis, but after completing 3 stages it gives me the error "Prescribed ultimate time not reached". When I try to increase the number of steps beyond 10000, the software doesn't allow me to do so.
Is there any solution for this?
Relevant answer
Answer
I am facing the same issue, did you find a solution?
  • asked a question related to Running
Question
3 answers
Could artificial intelligence run its own simulation by assuming initial conditions in whatever way is convenient?
Relevant answer
Answer
Artificial Intelligence (AI) has revolutionized the research world by enabling efficient data analysis, automating repetitive tasks, and facilitating predictive modeling across various disciplines. Researchers can extract insights from large datasets, accelerate drug discovery, and develop personalized healthcare solutions through machine learning and deep learning techniques. Natural Language Processing (NLP) tools aid in extracting information from textual data, while AI-driven technologies contribute to environmental monitoring and conservation efforts. Overall, AI has empowered researchers to tackle complex challenges, make groundbreaking discoveries, and drive innovation in research at an unprecedented pace.
  • asked a question related to Running
Question
1 answer
I'm trying to run a model in AMOS and I get this error but I don't know what I can do about it.
Relevant answer
Answer
This (bifactor?) model is not identified because the specific (D1 through D5) factors (1) each have only 2 indicators and (2) are not correlated with other variables in the model.
Fixing both loadings on the D (specific) factors to 1 may help, but the model may still show problems for other reasons. Bifactor models in which there is a specific factor for each facet often have problems. See
Eid, M., Geiser, C., Koch, T., & Heene, M. (2017). Anomalous results in G-factor models: Explanations and alternatives. Psychological Methods, 22(3), 541–562. https://doi.org/10.1037/met0000083
  • asked a question related to Running
Question
1 answer
Hi,
I am running a proteomic analysis on Salmonella; however (for the sake of saving resources), I would like to know if I can run the analysis on a bacterial cocktail of three strains of the same serovar.
Relevant answer
Answer
If the treatment conditions for the three strains of the same serovar are consistent, running the analysis on the bacterial cocktail is acceptable.
  • asked a question related to Running
Question
1 answer
I started running SWAT-CUP with 100 simulations, but the error message (see the attached screenshot) says the output files do not exist in the directory path, even though the calibration ran successfully.
Relevant answer
Answer
How did you solve it? I'm having the same problem
  • asked a question related to Running
Question
4 answers
I think SPSS made my computer slow. Be careful when running it on your computer.
Relevant answer
Answer
R + emacs
  • asked a question related to Running
Question
8 answers
I am trying to distinguish between fragments of 393 and 374 bp by the RFLP method. I am running a 3.5% agarose gel in a 25x15 cm electrophoresis system to visualize the enzyme digestion. I could not distinguish the bands after 60 minutes of running at 120 volts. When I ran for another 25 minutes, the bands faded, but the marker was still visible. If I run it directly for 85 minutes, no bands, including the marker, are visible in the gel. I also tried post-staining with ethidium bromide, but it didn't work. Any suggestions? Thank you...
Relevant answer
Answer
A 4% MetaPhor agarose gel will solve your problems.
  • asked a question related to Running
Question
1 answer
I need to calculate the single point of a species. I submitted a series of tasks; most of them worked, but a few failed, with an error like this (everything was normal until this step):
```
Localizing the valence orbitals [file orca_mdci/mdci_util.cpp, line 960]: Error (ORCA_MDCI): Cannot open GBW file: //w0/tmp/slurm_xxxx.xxxx/xxxx/example15.loc
ORCA finished by error termination in MDCI
Calling Command: /cvmfs/restricted.hpc.rwth.de/Linux/RH8/x86_64/ISV/ORCA/5.0.4-gompi-2022a/bin/orca_mdci //w0/tmp/slurm_brxxxx.xxxx/xxxx/example15.mdciinp.tmp
[file orca_tools/qcmsg.cpp, line 465]: .... aborting the run
```
I have tried some solutions found on this website, such as submitting with a single core, adjusting memory, allowing RHF, and switching to SOSCF when SCF doesn't converge, but none have been effective. So now I am hoping for assistance from experienced users, with my sincere gratitude. (Additionally, could there be some unstable or 'weird' species preventing the single-point or other calculations? Regardless of the adjustments made, no results are obtained in any situation.)
Relevant answer
Answer
More information is needed to determine the problem. Attach input and output files.
  • asked a question related to Running
Question
2 answers
When the WiFi is on, the job in Abaqus is aborted.
Relevant answer
Answer
Hi, please check your Abaqus license, and try running the job without WiFi/internet connected.
  • asked a question related to Running
Question
2 answers
For my project I have created an intervention that involves educating patients about a topic. I make them do a survey before, put up my intervention, then do the same survey after. I want to compare the before and after scores to see if my intervention made a significant difference. My problem is that, while my intervention has been running on the ward, most but not all of the patients have changed. What statistical test can I use?
(This is the first time I am using a statistical test on my own data, sorry if this is silly/ obvious)
Relevant answer
Answer
If you want to conclude anything scientifically meaningful, you need to have a control group with a similar intervention but that should not have any effect on the outcome. You must compare the change observed in the intervention group to the change observed in the control group. Without the control group the change you observe may be attributable to anything (including, but not exclusively to, your intervention). By comparing the change between control and intervention group, the effects of everything but of the interventions should cancel out.
Another critical point in such studies is the sampling. It's next to impossible to get a real random sample. Some patients are willing to participate, others are not. If the selection (for whatever reason) is linked, directly or indirectly, to the outcome, then your result can be generalized only to people who would participate. This may severely restrict the value of your conclusions, particularly when you don't know the reasons why patients don't participate.
I strongly suggest to contact a (bio-)statistician to help you plan the study (and to analyze it). Otherwise you take a high risk of producing worthless data. Unless your supervisor has some contact details, this may be a good start to ask for help: https://www.imperial.nhs.uk/research/research-facilities/national-institute-for-health-and-care-research-imperial-clinical-research-facility
  • asked a question related to Running
Question
3 answers
I am running a survey regarding metro-area network architecture and have some analytics to share through this link: https://docs.google.com/forms/d/1kJnEjukNDGC4JARuhgBI0HUUrQ8UNA-G71Aa6heaK8U/viewanalytics
The link will be active today only, until 8 pm CET.
Cheers,
Etienne
Relevant answer
Answer
I'd like to add that I've published a preprint that digests and analyzes the data.
  • asked a question related to Running
Question
1 answer
hi
I have started working on fog networks, and for this purpose I want to use the iFogSim simulation toolkit. But I am facing a lot of errors while running the code in Eclipse. Can anyone help me figure this out?
thanks
Relevant answer
Answer
Same problem, I'm getting errors too.
Could anyone help, please?
  • asked a question related to Running
Question
2 answers
Hello RG Community:), towards the above topic:
I entered all of the appropriate files within the ffTK Opt. Charge tab; however,
I am generating the below errors upon Run Optimization (just the first error is included
for brevity):
Atom name: C1 not found in molecule
Attached is the INPUT PSF and PDB Files and QM Target Data first output file: output14C+H-ACC-C1.out
I believe "molecule" is referencing the pdb file, which does include the C1 atom.
Please let me know if you would need to inspect the INPUT par files.
Thanks if you know:),
Joel 🚀
Relevant answer
Answer
Hi RG Community, the answer to my above inquiry is below, via Josh. His suggestion, together with amending my residue name to just the 3 characters 14C within both my psf and pdb files and renaming these files accordingly, seems to allow the program to function. Thanks gerardr as well :)
"Hi Joel, How did you generate your psf file? All the atom names are "X", which is weird, but I think the real reason it is bailing is your spacing. You've told VMD that you have an EXT psf, so each field has a specific width. In the EXT specification, it's supposed to be 8 characters + 1 space for many fields, but the space between your resname (14C) and your atom name (N1) isn't that wide. The most expedient thing to do is probably just to tell VMD that you have a "NAMD" formatted psf, which tells VMD to use space-delimited file reading rather than fixed widths. So instead of "PSF EXT" at the top, you'd want "PSF NAMD". -Josh"
  • asked a question related to Running
Question
8 answers
I am running a protein-ligand complex simulation using GROMACS 2021 on the Windows Subsystem for Linux. My laptop has an NVIDIA GeForce RTX 3050 GPU. When I run the lysozyme tutorial simulation (as given in the GROMACS tutorials) for 100 ns, the expected finish time shown is approximately 1 week. I looked at the topology file to get an understanding of the system size and found that my total system size is approximately 47,500 atoms, including solvent, ions, protein, and ligand.
1) The "dt" defined in the mdp file is 2 fs and the number of steps (nsteps) is 50,000,000. I wanted to know if there is a way to speed up the process, or is this the natural computation time that an RTX 3050 provides? I went through other queries about the same issue, and I have also worked with an RTX 3080 Ti, which completes a 100 ns simulation in approximately 30-40 min. So I assume that since the 3050 belongs to a similar class/family as the 3080 Ti, it should at least provide better simulation timing (say, 100 ns in 1-2 hours). I might be wrong about the technical aspects of GPU computation. Any help in this matter will be much appreciated.
2) Also, I wanted to know, since I am running these simulation in Windows Subsystem for Linux (WSL2), does that affect the computation speed of the GPU when MD Simulations are run using GROMACS?
I would appreciate if someone can help me out in this regard.
Thanks
Satyam
Relevant answer
Answer
I am using Amber in WSL with an RTX 3050 mobile GPU and am getting above 50 ns/day for MD calculations. Applying a small tweak to the GPU clock speed via overclocking helped me increase the calculation speed to 56 ns/day.
  • asked a question related to Running
Question
4 answers
I am running a regression in EViews that uses one main x variable and includes AR and MA terms. When I run the Variance Inflation Factor test, high VIF values are produced for the AR, MA, and lag terms as well as the main x variable. Are these high VIFs problematic? Although the AR, MA, and lag terms are listed with the x-variables, I don't think they should be treated like them when we talk about VIF. Wouldn't we expect high correlation between the ARMA/lag terms and the x variable? Isn't that the whole point of them? What am I missing here?
Relevant answer
Answer
Sure, the Variance Inflation Factor (VIF) does matter in linear regression analysis, even between lagged terms.
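For readers who want to inspect the collinearity directly, VIFs can be computed per regressor; lags of a persistent series will typically show inflated values, as the question anticipates. A minimal sketch (statsmodels assumed; the data and names are illustrative):
```
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.standard_normal(200).cumsum()})  # persistent series
df["x_lag1"] = df["x"].shift(1)
df["x_lag2"] = df["x"].shift(2)
X = sm.add_constant(df.dropna())

vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vifs)  # lags of a persistent series show large VIFs by construction
```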
  • asked a question related to Running
Question
3 answers
I am considering running an MD simulation of a protein with varying ligand concentrations. I think replica exchange molecular dynamics should be able to do that, but I have no idea how that is done.
Is it possible to get a tutorial or possibly other methods for running such a simulation?
Relevant answer
Hello,
If each replica has a different particle number (# of ligands) and you plan to use that variable as the exchange variable in the replica-exchange algorithm, then I think it would not work in the ensemble people usually use in MD (NPT).
You would have to simulate your system in the grand canonical ensemble (μVT) and control the exchange probability of your ligands with a ligand reservoir to keep the chemical potential fixed.
I think this is not so simple, and I don't recall a package that simulates such an ensemble in a simple/direct way.
  • asked a question related to Running
Question
1 answer
This is a first run in our lab and there are several variables that each add several days to the protocol. I will be using the DeepLabel Antibody Staining Kit from LogosBio to label c-Fos. Questions are:
Which primary antibody do you use? At what concentration? How long do you incubate?
What concentration of secondary antibody do you use? How long is this incubation?
If anyone would share experience/protocol that would be great! We have been clearing and imaging brain and other peripheral tissues as well so if we can be of any assistance, please feel free to contact me.
Relevant answer
Answer
In their latest publication in STAR protocols (PMID 38280198), Refaeli et al. published a detailed protocol for marking engram cells using a modified version of CLARITY in combination with a rabbit monoclonal c-Fos antibody (Synaptic Systems #226 008 [https://www.sysy.com/product/226008]).
  • asked a question related to Running
Question
1 answer
I have been trying to figure out how to run GDC using Bio-logic software but still not able to figure it out. Can someone please point me in the right direction to run this experiment?
Relevant answer
Answer
Ah, diving into the world of Galvanostatic Charge and Discharge (GDC) for supercapacitors, are we? It's an intriguing realm indeed, but fear not, I shall shed some light on the path ahead.
Firstly, kudos for choosing Bio-Logic software for your endeavor. It's a robust tool for electrochemical experiments like GDC. Now, let's get down to business.
To initiate a successful GDC experiment, you'll need to follow a meticulous process:
1. Setup Preparation: Ensure your experimental setup is primed and ready. This includes connecting all necessary equipment, such as the potentiostat and electrochemical cell, and verifying proper connections.
2. Software Configuration: Launch the Bio-Logic software and configure the settings for your GDC experiment. Pay close attention to parameters such as current range, voltage limits, and sampling intervals. These parameters dictate the course of your experiment.
3. Electrode Conditioning: Prior to the actual GDC cycles, condition your electrodes. This involves pre-treatments like cyclic voltammetry to stabilize the electrode-electrolyte interface.
4. Initiating GDC: With everything set up and configured, you're ready to commence the GDC cycles. Specify the desired charge/discharge current and voltage limits, and let the software take charge (pun intended).
5. Data Collection and Analysis: As the experiment progresses, the Bio-Logic software will record voltage and current data. Once complete, analyze the collected data to extract insights into the supercapacitor's performance, such as capacitance and energy storage capabilities (a worked capacitance calculation follows at the end of this answer).
6. Iterative Refinement: Experimentation is a journey of refinement. Don't hesitate to iterate on your setup and parameters based on initial results. This iterative approach is key to unlocking deeper understanding and optimizing performance.
Remember, patience and precision are your allies in the realm of electrochemical experimentation. Keep tinkering, keep refining, and soon you'll be unraveling the mysteries of supercapacitors with finesse.
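As promised in step 5, a minimal post-processing sketch (illustrative numbers; it assumes a linear discharge segment at constant current, for which C = I / |dV/dt|):
```
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # time in s (illustrative)
v = np.array([2.00, 1.85, 1.70, 1.55, 1.40])   # cell voltage in V during discharge
i_discharge = 0.5                              # constant discharge current in A (assumed)

slope, _ = np.polyfit(t, v, 1)                 # dV/dt of the linear region
capacitance = i_discharge / abs(slope)         # C = I / |dV/dt|
print(f"C = {capacitance:.1f} F")              # ~3.3 F for these numbers
```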
  • asked a question related to Running
Question
1 answer
I'm doing several steps, such as minimization and equilibration, to start running MD simulations and I'm trying to automate this process by running one step after finishing the other. When I go between different steps, I need to provide a PDB file from the last frame of DCD. Is there any way to tell it to write this PDB file within the NAMD configuration file? I've been doing it manually by loading DCD then PSF in VMD and saving the last frame as PDB, which is not ideal for automation.
Relevant answer
Answer
Right after asking this question, I checked out the commands on this link and tested some out:
I was able to write PDB instead of a binary file by adding this line in the first configuration file:
binaryoutput no
Then, giving .coor file as the PDB file to the second configuration file:
coordinates m.coor
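For a fully scripted pipeline, the same last-frame extraction can also be done outside VMD; a hedged sketch using the third-party MDAnalysis package (not part of NAMD; file names are placeholders):
```
import MDAnalysis as mda

u = mda.Universe("system.psf", "step1.dcd")  # topology + trajectory from the previous stage
u.trajectory[-1]                             # move to the final frame
u.atoms.write("last_frame.pdb")              # starting coordinates for the next stage
```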
  • asked a question related to Running
Question
2 answers
I am interested in individual costings for educational hygiene programs in pre-schools.
Relevant answer
The costs of running an educational program on handwashing in a pre-school can vary significantly depending on several factors, including the scope of the program, the materials used, the human resources involved, and the size of the pre-school. Some of the costs to consider include:
1. Educational Materials:
- Posters, leaflets, videos, or any supporting material to teach children about the importance of handwashing.
2. Human Resources:
- Salaries for educators or health professionals involved in implementing the program.
3. Training:
- Costs associated with training teachers, staff, and even students in proper handwashing practices.
4. Hygiene Products:
- Soap, paper towels, soap dispensers, and other items.
5. Occasional Special Activities:
- The cost of special events or related activities, such as talks by health professionals or practical demonstrations.
6. Evaluation and Monitoring:
- Resources needed to evaluate the effectiveness of the program over time.
7. Administrative Costs:
- General costs associated with administering the program, such as coordination and supervision.
For a more precise estimate, it is advisable to contact local suppliers, early childhood education specialists, and public health professionals. They can offer guidance specific to your region and help tailor the program to the needs of the pre-school in question.
In addition, exploring partnerships with local health organizations, companies, or government agencies can be a strategy for obtaining financial or material support for the educational program.
  • asked a question related to Running
Question
1 answer
1. My research involved 10 explanatory variables. After performing the CIPS panel unit root test, I found 4 variables stationary at level 1, 3 variables stationary at level I (1), and 2 variables stationary at level I (2). What should I do next? Do I perform a cointegration test?
2. I run both the Westerlund cointegration and Pedroni cointegration tests in Stata and EViews, but Stata shows "No more than six covariates can be specified." and with Eviews I can't run the test with more than 7 variables. Then what should I do?
Relevant answer
Answer
Since your dependent variable (CO2) is integrated of order I(1) and the other variables have a mixed order of integration, it is not advisable to use the Pedroni and Westerlund cointegration tests, because these two tests can only be used when the DV is integrated at first difference and the independent variables are of I(0) or I(1) order. So you can go for the PMG-ARDL, which is applicable in your case.
Second, some of your variables are integrated at second difference, so it is better to drop those variables and only use those that are integrated at most at I(1). You can refer to this article.
  • asked a question related to Running
Question
5 answers
I have purified an overexpressed protein from BL21 (DE3) cells using a Ni-NTA column. When we run the purified protein on both native PAGE and SDS-PAGE, they show different results: SDS-PAGE shows only one band of purified protein, whereas native PAGE shows two bands. I have repeated the whole experiment three times and found the same results. What could explain two bands on native PAGE when there is only one on SDS-PAGE? I have attached the native PAGE image.
Relevant answer
Answer
Shweta Rai Hi, Shweta. I also want to run my protein (purified by Ni-NTA column) on native PAGE; can you share the details? I ran a 6% resolving gel at 40 V at 4 degrees but could not see any band. The size of my protein is 50 kDa, but on SDS-PAGE I see a band around 75 kDa (repeated three times), so I want to run the purified protein on a native gel.
  • asked a question related to Running
Question
1 answer
I am doing an MD simulation of the HDAC11 protein with prospective ligands. I have completed the molecular dynamics production run, but whenever I run the MMPBSA.py analysis, it shows the following error:
Loading and checking parameter files for compatibility...
cpptraj found! Using /home/mohon/anaconda3/envs/amber/bin/cpptraj
mmpbsa_py_energy found! Using /home/mohon/anaconda3/envs/amber/bin/mmpbsa_py_energy
Preparing trajectories for simulation...
100 frames were processed by cpptraj for use in calculation.
Running calculations on normal system...
Beginning GB calculations with /home/mohon/anaconda3/envs/amber/bin/mmpbsa_py_energy
  calculating complex contribution...
  calculating receptor contribution...
  calculating ligand contribution...
Beginning PB calculations with /home/mohon/anaconda3/envs/amber/bin/mmpbsa_py_energy
  calculating complex contribution...
  File "/home/mohon/amber22/bin/MMPBSA.py", line 100, in <module>
    app.run_mmpbsa()
  File "/home/mohon/amber22/lib/python3.11/site-packages/MMPBSA_mods/main.py", line 224, in run_mmpbsa
    self.calc_list.run(rank, self.stdout)
  File "/home/mohon/amber22/lib/python3.11/site-packages/MMPBSA_mods/calculation.py", line 82, in run
    calc.run(rank, stdout=stdout, stderr=stderr)
  File "/home/mohon/amber22/lib/python3.11/site-packages/MMPBSA_mods/calculation.py", line 472, in run
    error_list = [s.strip() for s in out.split('\n')
                                    ^^^^^^^^^^^^^^^
TypeError: a bytes-like object is required, not 'str'
Fatal Error!
All files have been retained for your error investigation:
You should begin by examining the output files of the first failed calculation.
Consult the "Temporary Files" subsection of the MMPBSA.py chapter in the
manual for file naming conventions.
My input file configuration is given below:
Input file for running PB and GB
&general
   endframe=1000, verbose=2,
#  entropy=1,
/
&gb
  igb=2, saltcon=0.100
/
&pb
  istrng=0.100,
/
Can anyone help me with it?
Relevant answer
Answer
Hello! I have no expertise in this field
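For what it's worth, the TypeError in that traceback is a generic Python 3 bytes-versus-str mismatch rather than anything chemistry-specific; a minimal illustration of the language behaviour (not a patch for MMPBSA.py itself):
```
out = b"line1\nline2\n"           # subprocess pipes return bytes by default
# out.split('\n')                 # TypeError: a bytes-like object is required, not 'str'
lines = out.decode().split('\n')  # decode to str first, then split
lines_b = out.split(b'\n')        # ...or stay in bytes throughout
print(lines, lines_b)
```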
  • asked a question related to Running
Question
1 answer
Hi, I am trying to create a script for a particle that impacts a substrate. I want to run multiple simulations with varying particle sizes that would run automatically. How would I go about accomplishing this?
Relevant answer
Answer
Hello! I have no expertise in this field
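Since the question is really about automation, here is a generic, tool-agnostic parameter sweep (the solver script name and its flags are hypothetical placeholders for whatever simulation command is actually used):
```
import subprocess

particle_radii = [0.5, 1.0, 2.0]  # illustrative particle sizes
for r in particle_radii:
    job = f"impact_r{r}"          # unique tag so output files don't collide
    subprocess.run(
        ["bash", "run_solver.sh", f"--radius={r}", f"--job={job}"],  # hypothetical script
        check=True,               # stop the sweep if a run fails
    )
```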
  • asked a question related to Running
Question
2 answers
Good day everyone,
I would like some help in getting online references to answer the question I have posed. I am dithering and running from pillar to post to find answers. Many thanks.
Regards
Vijaykumar
Relevant answer
Answer
Hello! I have no expertise in this field
  • asked a question related to Running
Question
3 answers
I am working on a system of 40 atoms (hafnium selenide, monoclinic structure). When I try to run the command "mpirun -np 8 VASP", the process terminates with this error. I have 8 GB RAM and I have tried all possible combinations of NCORE and KPAR, but nothing worked. What can I do now? This structure does not have any periodic atoms. Can I use ISYM=0? Will it help solve the issue?
Relevant answer
Answer
For production runs, 8 cores and 8 GB are not nearly enough.
If you just want to learn, choose a small system of fewer than 10 atoms with few k-points and a small ENCUT.
  • asked a question related to Running
Question
6 answers
Hello,
So I have been struggling to get a successful western blot using MIN6 derived EVs, and it has been a real struggle.
Every time I isolate my EVs, and after lysing them, I run the proteins in the gel and see no band at all. I use Coomassie blue or the stain-free precast gels to check the run.
The ladder shows up fine though.
After lysing my EVs, I measure the protein amount using microBCA; when I diluted the sample 1/2 I got a final concentration of around 200 ug/ml, and when I diluted it 1/10 I got a final concentration of 400 ug/ml. This is already weird, but I still loaded my sample assuming I had 200 ug/ml to be on the safe side, and I used 4x Laemmli in order to avoid unnecessary dilutions; by my calculations I should have loaded around 15 ug/ml. But the imaging showed no protein at all, and now I am really puzzled. (The first lane is the ladder, and I am supposed to see two bands on the left side of it.)
Briefly, here are the steps I followed:
-Collect media from min6 cells (150ml)
-Centrifuge 500g/10min, collect supernatant, centrifuge 2000g for 20min, collect supernatant,ultracentrifuge 120000g for 90min(4C), keep the pellet and wash with pbs 150000g for 70min(4C).
-Finally I diluted the pellet in 500ul of PBS and store at -80C.
Lysis:
-Take 100ul of my ev sample, put it in a 10k column, centrifuge at 1400g/15min at 4C, add 500ul of 1XRIPA to my concentrate, spin 14000g/15min 4c. Put the column upside down in the tube and spin 2000g/2min. Add 70ul of RIPA and incubated on ice for 15min, then spin again 14000g/15min 4c and collect supernatant, and put on ice until further use.
Gel:
I use stain free anyKd precast gels with 50ul wells, and I use for the prep solution :4x leammli(900ul)+b-mercaptoethanol(100ul), because I am looking for tsg101 antibody. I then mix 1/8th of prep solution with 7/8th of my lysed sample. Heat up at 70C 10min, and load the sample in the well, run at 120v/1h.
I don't know what went wrong.
I tried once running the gel with unlysed EVs and got a faint band with Coomassie blue, so maybe the lysis is wrong, or the initial amount of conditioned media is too low, as I saw some people starting with 1-2 liters.
I would be grateful if anyone can help.
Thanks
Relevant answer
Answer
Sarah Boucenna, in our lab we had Abcam 10 kDa columns (https://www.abcam.com/products/sample-preparation-kits/10kd-spin-column-ab93349.html); it says they deproteinize (see also the attachment from another vendor). We only use them for low-molecular-weight molecules and when doing mass spec. My extraction was with RIPA buffer, and yes, we did westerns and they worked well for microsomes and EVs.
All in all, you could be misreading the protein concentration, as you suspected. Good luck.
  • asked a question related to Running
Question
1 answer
Hello,
I am using Pophelper in R to run the algorithm implemented in CLUMPP for label switching and to create the barplots for the different K (instead of DISTRUCT).
I am getting a warning message when I merge all the runs from the same K using the function mergeQ() from the package which is slightly bothering me. Can anyone help me with this?
The warning message is as follows...
In xtfrm.data.frame(x) : cannot xtfrm data frames
Thanks,
Giulia
Relevant answer
Answer
Have you found a solution already?