Science topics: Computer Science and Engineering > Coding
Coding - Science topic
Explore the latest questions and answers in Coding, and find Coding experts.
Questions related to Coding
I want a bifurcation diagram of two ODEs against the parameter tau. The existing MATLAB code is attached.
This is a code block from the nutWallFunction library in OpenFOAM, in which the effective kinematic viscosity at the wall ($\nu_{t,w}$) is calculated from the resolved field (in the case of LES) or the mean field (in the case of RANS) and $y^+_p$ (the wall-normal distance of the first cell centre). This allows a new viscosity value to be set as the wall boundary condition via the log law, assuming the first cell centre lies in the logarithmic layer of the universal velocity profile.
Now, in this code block of the member function nutUWallFunctionFvPatchScalarField::calcYPlus(), the yPlus value is iterated to convergence with a maximum of 10 iterations. Why are these iterations needed, and why is the maximum number of iterations 10? The code is given below for reference:
tmp<scalarField> nutUWallFunctionFvPatchScalarField::calcYPlus
(
    const scalarField& magUp
) const
{
    const label patchi = patch().index();

    const turbulenceModel& turbModel = db().lookupObject<turbulenceModel>
    (
        IOobject::groupName
        (
            turbulenceModel::propertiesName,
            internalField().group()
        )
    );

    const scalarField& y = turbModel.y()[patchi];
    const tmp<scalarField> tnuw = turbModel.nu(patchi);
    const scalarField& nuw = tnuw();

    tmp<scalarField> tyPlus(new scalarField(patch().size(), 0.0));
    scalarField& yPlus = tyPlus.ref();

    forAll(yPlus, facei)
    {
        scalar kappaRe = kappa_*magUp[facei]*y[facei]/nuw[facei];

        scalar yp = yPlusLam_;
        scalar ryPlusLam = 1.0/yp;

        int iter = 0;
        scalar yPlusLast = 0.0;

        do
        {
            yPlusLast = yp;
            yp = (kappaRe + yp)/(1.0 + log(E_*yp));
        } while (mag(ryPlusLam*(yp - yPlusLast)) > 0.01 && ++iter < 10);

        yPlus[facei] = max(0.0, yp);
    }

    return tyPlus;
}
My question concerns the do-while loop at the end of the yPlus iteration.
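For context on what that loop is doing: the log law makes y+ implicit, since y+ ln(E y+) = kappa*Re_y, so the code solves it by fixed-point iteration, and the iteration converges from the yPlusLam starting guess in only a few steps, which is why a cap of 10 suffices in practice. A standalone sketch of the same iteration in Python (kappa = 0.41, E = 9.8, and kappaRe = 500 are assumed illustrative values, not taken from any particular case):

```python
import math

def calc_yplus(kappa_re, kappa=0.41, E=9.8, yplus_lam=11.0, tol=0.01, max_iter=10):
    """Fixed-point iteration for the log-law y+ (mirrors the OpenFOAM loop)."""
    yp = yplus_lam          # start from the laminar/log-layer intersection
    r_yplus_lam = 1.0 / yp  # scale used for the relative convergence check
    for _ in range(max_iter):
        yp_last = yp
        yp = (kappa_re + yp) / (1.0 + math.log(E * yp))
        if abs(r_yplus_lam * (yp - yp_last)) <= tol:
            break
    return max(0.0, yp)

# the iteration typically converges in a handful of steps
yp = calc_yplus(kappa_re=500.0)
```

At convergence yp satisfies yp*ln(E*yp) = kappaRe, which is exactly the log-law relation the boundary condition needs.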
A human can only aspire to fluency in so many different languages before mixing up words due to code switching. Thus, MAYBE those who cannot learn so many languages turn to linguistics and coding to earn money.
How big is the difference between what is written in many AI papers and its real code?
In experimental design, when the aim is to find an objective function or response as a function of variables, the real variables are converted to coded variables. For example, if the real temperature levels are 30, 40, and 50 °C, the coded values can be obtained using the equation X = (T - Tcenter)/(Thigh - Tlow), so the coded temperatures are -0.5, 0, and 0.5.
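As a quick check, this coding transform is one line of code; note that many textbooks divide by the half-range (Thigh - Tlow)/2 instead, which maps the same levels to -1, 0, +1. A minimal sketch using the formula as written above:

```python
def code_variable(t, t_low, t_high):
    """Code a real variable using X = (T - Tcenter) / (Thigh - Tlow)."""
    t_center = (t_low + t_high) / 2.0
    return (t - t_center) / (t_high - t_low)

# the temperature levels from the example: 30, 40, 50 degrees C
coded = [code_variable(t, 30.0, 50.0) for t in (30.0, 40.0, 50.0)]
# coded == [-0.5, 0.0, 0.5]
```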
RENT @Annuities #Global Markets ^Ownership *Roles %Rules (Privacy:Transparency)
I have tried the following steps, but these are giving errors:
1. !pip install pymatgen
2. !pip install mp_api
3. from mp_api.client import MPRester
   with MPRester("my_api_key") as mpr:
       docs = mpr.summary.search(material_ids=["mp-149", "mp-13", "mp-22526"])
And got a very long error as a result.
What am I doing wrong?
The Ricci tensor assumes the role of helping us understand curvature. Within my Universal Theory research, the Ricci tensor unveils itself. I was pleased to find, as detailed in my research document on the Grand Unified Theory Framework (of which advancements in technology are showing there may be more than one viable form as science progresses), that the Ricci tensor was typically vanishing to zero in relation to the Schwarzschild metric, as it should, back when I was performing feasibility and speciousness checks via calculations with other experts and myself. But in practical applications of the Grand Unified Theory Framework, vanishing to zero unravels very intriguing consequences.
One of said consequences was something small and interesting I wanted to discuss. The purpose is to highlight the intricacies of implementing such highly comprehensive concepts in practical settings such as code, and thus to detail the challenges researchers may face when translating comprehensive physics and mathematics formulations into concrete applications. More often than not I have found it requires innovative adaptations and problem-solving. I also want to hear if anyone has had experience with similar things and what that experience was.
My recent and past ventures into authenticating the Universal Theory framework in code, but also writing complex neural networking and AI code with it, as well as quantum computing code, had a lot of interesting hurdles. I immersed myself in the depths of this and then encountered a peculiar happenstance: the vanishing of the Ricci tensor to zero in the code processes. I didn't realize why a lot of the code wasn't working. It's because I was trying to run iterative artificial learning code. And since it incorporated the Universal Theory, and did so in a mathematically accurate way (authenticating it in various ways via code this way is also possible), I didn't realize that no matter what I did the code would never work with the full form of the theory, because the Ricci tensor would always vanish to zero in terms of the Schwarzschild metric within the subsequent processes running off the initial code. And while this was validating for my theory, it was equally frustrating to realize it may be a massive hurdle to instituting it in code.
This unexpected twist threw me into a world where certain possibilities seemed to evaporate into the ether. The task of setting values for the tensor g_ij (the Einstein tensor form utilized in the Grand Unified Theory Framework) in code demanded a lot of intricate modifications.
I found myself utterly lost. I thought the code was specious, before I thought to check the Ricci tensor calculations and the Christoffel and Riemann formations, and got it running. I think it's quite scary in a way that someone could have code similar to my own, or another form of unified theory, but if they didn't have sufficient knowledge of relativity, they may never know whether the code worked. I feel few have attempted to embrace the tangible variations of complex frameworks within code. I wanted to share this because I thought it was interesting as an example of multidisciplinary science. Coding and physics together is always interesting, and there isn't a whole lot of support or information for people venturing into these waters sometimes.
I would like to know what everyone thinks of multidisciplinary issues such as this, wherein one may entirely miss valuable data by not knowing what to look for, and how that may affect the final results and calculations of research and experimentation. In this situation, I ultimately had to employ some of the concepts in my research document to arrive at the Ricci tensor without any formations of Christoffel or Riemann symbols in the subsequent processes. I thought that was interesting from a physics and coding perspective too, because I never would've known how to parse this code to get it functioning without knowledge of relativity.
I've made a simple simulation in Simulink consisting of a PV panel, a DC-DC converter, a resistive load, and a function block acting as an MPPT running the code for the grey wolf algorithm.
When I increase the solar irradiance, the current from the panel (2nd graph) starts fluctuating. When I decrease the solar irradiance, the voltage from the panel (4th graph) starts to fluctuate. Furthermore, if I change the value of the load resistor to 100 ohms from 10 ohms, both the current and voltage go crazy.
I have no idea what is causing this and was hoping someone could point out my mistake. I have attached the pictures below along with the simulink file. I am willing to provide more information if needed.
I am curious about the technology behind plant identifier apps. How are they coded? Is there already an app that can differentiate all the species in one frame? For example, can it identify each species in a bouquet of flowers?
Has anyone got suggestions for types of qualitative analysis that combine inductive and deductive approaches? For a project I'm working on, we identified a relevant pre-existing framework about accessibility maturity with 5 key areas. We've completed semi-structured interviews about strengths and barriers in these 5 areas, and are now analysing the data. I'd like to focus on findings from each of these five areas as the main sections in our report, but also code inductively within each of them to discuss the emergent patterns we've found. Any suggestions for an analysis approach that might suit? I've looked at thematic analysis, content analysis, and framework analysis but none of these seem to fit as we're beginning with a pre-existing framework then coding inductively within it. Thanks for your suggestions!
Hello, I'm using the Piecewise Linear Function in Origin to fit my data, which should contain 9 segments. Following the method from OriginLab (http://www.originlab.com/doc/Tutorials/Fitting-Piecewise-Linear), I changed my code; it compiles, but when I use it to fit my data, I get just 2 segments. My code is pasted here; I hope you can give me some comments or ideas. Thanks a lot.
The Lyapunov exponent of a coupled map lattice with a one-dimensional local map is known. But what is the theory behind the Lyapunov exponent of a CML with a 2D local map?
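For the 1D-local-map case, the standard recipe is to evolve a tangent vector with the chain-rule Jacobian of the coupled map and average the log of its growth; for a 2D local map the same construction applies with the scalar derivative f'(x) replaced by the 2x2 Jacobian of the local map (so the tangent space has 2N components, and the full spectrum is obtained by Gram-Schmidt/QR reorthogonalization). A minimal pure-Python sketch of the 1D case (diffusively coupled logistic maps; lattice size, coupling strength, and iteration counts are arbitrary illustrative choices):

```python
import math, random

def f(x):
    """Local logistic map, fully chaotic at r = 4."""
    return 4.0 * x * (1.0 - x)

def df(x):
    """Derivative of the local map."""
    return 4.0 * (1.0 - 2.0 * x)

def largest_lyapunov(n=32, eps=0.1, n_transient=500, n_steps=2000, seed=0):
    """Largest Lyapunov exponent of the CML
    x_i' = (1-eps) f(x_i) + eps/2 (f(x_{i-1}) + f(x_{i+1})), periodic BCs."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    v = [rng.random() for _ in range(n)]  # tangent vector
    lyap_sum = 0.0
    for step in range(n_transient + n_steps):
        fx = [f(xi) for xi in x]
        dfx = [df(xi) for xi in x]
        # state update
        x = [(1 - eps) * fx[i] + 0.5 * eps * (fx[i - 1] + fx[(i + 1) % n])
             for i in range(n)]
        # tangent update with the Jacobian of the coupled map
        v = [(1 - eps) * dfx[i] * v[i]
             + 0.5 * eps * (dfx[i - 1] * v[i - 1] + dfx[(i + 1) % n] * v[(i + 1) % n])
             for i in range(n)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        v = [vi / norm for vi in v]          # renormalize each step
        if step >= n_transient:
            lyap_sum += math.log(norm)       # accumulate log growth
    return lyap_sum / n_steps

lam = largest_lyapunov()
```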
I'm interested in applying the Non-equilibrium Thermodynamics for Glassy Polymers (NET-GP) [1] framework to Statistical Associating Fluid Theory (SAFT) variations. Although NE-SAFT models have been reported multiple times in the literature [2], none of them explain how to do this starting from equilibrium SAFT codes/programs (such as MATLAB or Python). The papers generally just write "determined by numerical method" in MATLAB, which doesn't offer much insight.
The biggest issue (that I can identify) is that conventional equilibrium SAFT programs take temperature (T) and pressure (P) as independent variables, whereas in the NET-GP framework the independent variables are instead temperature (T) and a volume(V)-dependent variable (such as polymer volume or polymer density).
Given this information, how should I modify a conventional SAFT code to produce NE-SAFT? Is there a quick workaround for this (T,V) dependency? Or would the only way be rewriting all SAFT equations to take (T,V) as independent variables?
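One observation that may help: most equilibrium EoS codes already evaluate pressure natively as a function of (T, density/volume), and the (T, P) interface is just a density solver wrapped around that, so "converting" to (T, V) often means bypassing the volume solver rather than rewriting the equations. A toy sketch of that relationship (with the van der Waals EoS standing in for SAFT; the constants a and b below are illustrative, not fitted to anything):

```python
# Van der Waals stands in for SAFT here: the structure is the same --
# the EoS natively gives P from (T, v), and a (T, P) interface is just
# a root search wrapped around it.
R = 8.314  # J/(mol K)

def pressure(T, v, a=0.1382, b=3.19e-5):
    """P(T, v) from the EoS -- the 'native' (T, V) form."""
    return R * T / (v - b) - a / (v * v)

def volume(T, P, v_lo=3.2e-5, v_hi=1.0, tol=1e-12):
    """Invert P(T, v) = P by bisection -- the (T, P) wrapper.
    Assumes a single (gas-like) root in [v_lo, v_hi]."""
    for _ in range(200):
        v_mid = 0.5 * (v_lo + v_hi)
        if (pressure(T, v_mid) - P) * (pressure(T, v_lo) - P) <= 0.0:
            v_hi = v_mid    # root lies in the lower half
        else:
            v_lo = v_mid
        if v_hi - v_lo < tol:
            break
    return 0.5 * (v_lo + v_hi)

# NET-GP-style usage: fix (T, density) and evaluate the EoS directly,
# skipping the volume solver entirely
v = volume(300.0, 1.0e5)       # (T, P) route
p_check = pressure(300.0, v)   # (T, V) route recovers the same P
```

The practical implication is that a (T,V)-based NE-SAFT may only require exposing the Helmholtz-energy/pressure routines at a specified density, not re-deriving the model.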
Is there any computational tool/software/program code to convert/map Illumina probe set IDs to gene IDs for a large data file...
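If no dedicated tool turns up, the conversion is a plain table join between the platform's annotation file and your data; a hedged pure-Python sketch (the probe IDs, column names, and gene symbols below are illustrative placeholders, not taken from a real annotation file):

```python
import csv, io

# hypothetical annotation table: probe ID -> gene symbol
annotation_csv = "ProbeID,Symbol\nILMN_0000001,GENE_A\nILMN_0000002,GENE_B\n"
probe_to_gene = {row["ProbeID"]: row["Symbol"]
                 for row in csv.DictReader(io.StringIO(annotation_csv))}

# map a (large) list of probe IDs to gene IDs in one pass;
# unknown probes fall back to "NA"
probes = ["ILMN_0000001", "ILMN_0000002", "ILMN_9999999"]
genes = [probe_to_gene.get(p, "NA") for p in probes]
```

For large files the same dict-lookup pattern scales linearly, and pandas `merge` does the identical join in one call.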
Let's say I have two dimensions:
Dimension 1: 2 categories
Dimension 2: 4 categories
If I were to code information based on these dimensions and I wanted to compare the agreement (e.g., dimension 1 = 90% vs. dimension 2 = 70%), considering the difference of categories in each one, is there a statistic that weights by the number of categories?
Would it be more appropriate to dichotomise the categories of dimension 2 to calculate agreement? I mean calculating the agreement for each category of dimension 2 separately, so that it is "category 1 = yes or category 1 = no", "category 2 = yes or category 2 = no", etc., instead of calculating it for all 4 categories together. This way you would calculate the agreement for each category in dimension 2 separately, but the results could be compared with the results for dimension 1.
I would also appreciate bibliography on the topic.
Thank you!
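On the one-vs-rest idea above: chance-corrected agreement statistics such as Cohen's kappa already adjust for the number of categories through their expected-agreement term, and kappa can also be computed per dichotomised category exactly as described. A minimal pure-Python sketch with made-up ratings (the data are illustrative only):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # expected agreement under independent marginals
    p_exp = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (p_obs - p_exp) / (1.0 - p_exp)

def one_vs_rest(ratings, category):
    """Dichotomise a multi-category rating into 'category = yes/no'."""
    return [r == category for r in ratings]

# made-up 4-category ratings from two coders
rater1 = ["a", "b", "c", "d", "a", "b", "c", "d", "a", "b"]
rater2 = ["a", "b", "c", "a", "a", "b", "d", "d", "a", "c"]

overall = cohens_kappa(rater1, rater2)
per_cat = {c: cohens_kappa(one_vs_rest(rater1, c), one_vs_rest(rater2, c))
           for c in set(rater1)}
```

Because the expected-agreement term uses each dimension's own marginals, the overall and per-category kappas are comparable across dimensions with different numbers of categories, which is closer to what raw percent agreement cannot do.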
May I have the MATLAB code of some well-known multi-objective benchmark functions like Schaffer, Fonseca, ZDT1, ZDT6, Srinivas, DTLZ5, DTLZ6, LZ09_F1, LZ09_F2, LZ09_F6, LZ09_F7, LZ09_F9, WFG4, CF1, CF2, CF4, CF5, CF6, CF9, and CF10?
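Most of these benchmarks are defined by short closed-form expressions in the original papers (Zitzler et al. for ZDT, Deb et al. for DTLZ), so they are quick to re-implement; as one illustration, here is ZDT1 in Python (a MATLAB translation is essentially line-for-line):

```python
import math

def zdt1(x):
    """ZDT1 test problem: x in [0,1]^n, returns (f1, f2).
    The Pareto front is f2 = 1 - sqrt(f1), attained when x[1:] = 0."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# a Pareto-optimal point: only x[0] nonzero
f1, f2 = zdt1([0.25] + [0.0] * 29)
# f1 = 0.25, f2 = 1 - sqrt(0.25) = 0.5
```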
When I try to perform the following calculation, Python gives the wrong answer.
2*(1.1-0.2)/(2-0.2)-1
I have attached a photo of the answer.
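Python is not actually giving a wrong answer here: 1.1 and 0.2 have no exact binary floating-point representation, so the expression evaluates to roughly 2.2e-16 instead of exactly 0. A short sketch contrasting binary floats with the `decimal` module's exact decimal arithmetic:

```python
from decimal import Decimal

# binary floats: 1.1 - 0.2 is not exactly 0.9, so the result is ~2.2e-16
float_result = 2 * (1.1 - 0.2) / (2 - 0.2) - 1

# exact decimal arithmetic gives 0
exact_result = 2 * (Decimal("1.1") - Decimal("0.2")) / (2 - Decimal("0.2")) - 1
```

The usual remedies are rounding for display, comparing with a tolerance (`math.isclose`), or using `decimal`/`fractions` when exactness matters.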
Dear all,
I'm just starting to teach myself how to make macros to process confocal images in ImageJ (FIJI). I've managed to make a macro that does most of my processing using the record macro function (it has already saved me an enormous amount of time!), however I'd like to add brightness and contrast adjustment to the processing.
My current process involves:
1. Splitting the multi-channel images
2. Applying grey LUT to all
3. Saving all in a temporary folder
4. Creating a merged image of the channels in various colours
5. Creating a panel with split and merged images
6. Adding a scale bar
7. Saving and closing all windows
However, when I look at the recorded code produced by the brightness and contrast adjustment dialogue, I get code which doesn't seem right to me:
//run("Brightness/Contrast...");
run("Apply LUT");
I've also tried the following code, however it automatically maximises the contrast, where I'd like to use the brightness and contrast adjustment to reduce background in some channels.
run("Enhance Contrast", "saturated=0.35");
Is there anyone else using macros for brightness and contrast adjustment in ImageJ?
Many thanks,
Sam
I am presently writing a Python script for a DFT-based microkinetic model for a biomass project I am working on. At the moment, I need to calculate the DRC (degree of rate control), but I am having trouble writing the appropriate code in Python. I will be very grateful if anyone can assist me to get past this stage.
I look forward to your kind response.
Many thanks,
Shed!
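I don't know the structure of your model, but if the DRC meant above is Campbell's degree of rate control, X_i = (k_i/r)(dr/dk_i), it can be estimated numerically without analytic derivatives via a central finite difference in ln k_i; a generic sketch (the two-step rate expression is a made-up stand-in for a real microkinetic model):

```python
import math

def degree_of_rate_control(rate_fn, ks, i, delta=1e-6):
    """X_i = (k_i / r) * dr/dk_i, computed as d ln r / d ln k_i
    by central difference in ln k_i."""
    k_up = list(ks); k_dn = list(ks)
    k_up[i] *= math.exp(delta)
    k_dn[i] *= math.exp(-delta)
    r_up, r_dn = rate_fn(k_up), rate_fn(k_dn)
    return (math.log(r_up) - math.log(r_dn)) / (2.0 * delta)

# made-up steady-state rate of two sequential steps (resistances add)
def toy_rate(ks):
    k1, k2 = ks
    return 1.0 / (1.0 / k1 + 1.0 / k2)

ks = [1.0, 100.0]        # step 1 much slower -> nearly rate-determining
x1 = degree_of_rate_control(toy_rate, ks, 0)
x2 = degree_of_rate_control(toy_rate, ks, 1)
# x1 ~ 0.99 (step 1 controls the rate), x2 ~ 0.01, and x1 + x2 ~ 1
```

In a real model, `rate_fn` would re-solve the steady-state coverages for the perturbed rate constants (keeping the equilibrium constants of the other steps fixed, per Campbell's definition).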
I used the "sf" package in RStudio to label country and state names, but I always get the same error message when I try to run the st_centroid command.
> st_centroid(world, of_largest= FALSE)
Error in wk_handle.wk_wkb(wkb, s2_geography_writer(oriented = oriented, :
Loop 96 is not valid: Edge 743 crosses edge 998
In addition: Warning message:
In st_centroid.sf(world, of_largest = FALSE) :
st_centroid assumes attributes are constant over geometries of x
I want to write a descriptor for calculating interactions between a protein-peptide complex, then create a NN for docking.
I found some code, but it's for protein-ligand complexes.
Hi there
I used Atlas.ti back in the day, and I remember that after you made a code, the text would highlight itself in the color connected to the code. With this you could easily see what you had and hadn't coded.
Now with Atlas.ti 8 it just makes a small black line at the end of the sentence, which is quite hard to see.
Can I highlight the text with the same color as the code?
Thank you!
I am trying to investigate the impact of multiple technology adoptions on production costs using seven years (2015–2021) of panel data. My sample size is 405. The sample is separated into two groups: the treatment group and the control group. I would like to use the difference-in-differences (DD or DID) model to assess the impact. My coding is "0 = non-adopter, 1 = partial adopter, and 2 = full adopter".
At present, I am struggling with coding issues, as in the years 2015 and 2016, the treatment group adopted only one technology, and from 2017 to 2021, they adopted both technologies.
YEAR   Tech A   Tech B   BOTH
2015     0        1       ?
2016     0        1       ?
2017     1        1       2
2018     1        1       2
2019     1        1       2
2020     1        1       2
2021     1        1       2
I would kindly like to know whether I should code 1 or code 2 for 2015 and 2016?
Thank you very much.
THERMODYNAMIC ANALYSIS OF TRIPLE EFFECT ABSORPTION CHILLERS
Energy and exergy analysis.
I have heard that the new BME688 sensor can be used to detect Volatile Organic Compounds (VOCs), volatile sulfur compounds (VSCs), and other gases such as carbon monoxide and hydrogen. However, we could not really understand how to interface this sensor and which codes to use in order to obtain these data. Presently, we know of codes that can help us find out Air Quality Index but that is limited to temperature, pressure, and humidity. If anyone has researched about it or knows a way to obtain the other data using Raspberry Pi, please let me know! Thanks!
Hello
I am going to pursue a master's in sustainable chemical engineering this year. I want to ask what IT skills you think I will need. To give you a better picture, I attach my curriculum.
Hi everyone!
Has anyone tried to use this antibody for immunofluorescence on brain slices? My problem is that I've done a double immunofluorescence for Iba1 (WAKO) and TREM2 on rat brain slices, but the labels don't colocalize. The TREM2 antibody seems to label cells that are Iba1-negative (I've attached a confocal picture of the IHC showing both labels).
I hope somebody can help me!
Thank you in advance!
Please, when calculating adsorption of a dihydrocresol isomer at the PBE+TSHI level in the VASP code on a solid metallic surface, the job ended with this error message: "some force-field parameter for the following atom is not defined: H 11 H 16". I am really looking for a way to overcome it.
I am currently working on the identification of the parameters of a solar panel (5-parameter model) using the PSO and ABC algorithms. Unfortunately, I find it hard to code these algorithms. Could you please send me the MATLAB codes for PSO and ABC as used for parameter identification? Thank you.
I have developed some code, sketched vaguely below:
bigFunction( ){
    Fun1( ){
    }
    Fun2( ){
    }
    Fun3( ){
    }
}
i = 1:2000
a = sapply(i, function(i) bigFunction( ))
This bigFunction gives 2000 values.
a1 = mean(a), which gives 1 value.
Now I would like to change an input of bigFunction, which then gives another 2000 values, and find
a2 = mean(a).
This will happen over and over again for a total of 100 times, and if possible the changes in the parameter values should happen automatically, so that at the end I will have a1, a2, a3, ..., a100.
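In R, the outer sweep is just another sapply over the parameter vector, wrapping the existing sapply over i. In case seeing the structure helps, here is the same pattern sketched in Python (`simulate()` is a made-up placeholder for bigFunction, not a real model):

```python
# hypothetical stand-in for bigFunction(): one simulated value per draw i
def simulate(i, param):
    return (i % 7) * param  # deterministic placeholder, not a real model

def mean_over_draws(param, n_draws=2000):
    """Inner loop: 2000 simulated values for one parameter, reduced to a mean."""
    values = [simulate(i, param) for i in range(1, n_draws + 1)]
    return sum(values) / n_draws

# outer loop: sweep 100 parameter values -> a1, a2, ..., a100 in one list
params = [0.1 * j for j in range(1, 101)]
means = [mean_over_draws(p) for p in params]
```

The corresponding R idiom would be `sapply(params, function(p) mean(sapply(1:2000, function(i) bigFunction(p))))`, which produces all 100 means in one vector.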
Dear Colleagues
I am using a UMAT to model the hysteresis behavior of a material.
It is a simple UMAT which modifies the elastic modulus of the material at different strains in order to reproduce the hysteresis behavior.
I need to convert it into a VUMAT for Explicit Dynamic simulation in ABAQUS.
What should I change in my code?
I have attached my code.
How can I generate all possible non-repeated combinations of .mat files?
For example, there are the following .mat files: One.mat, Two.mat, Three.mat,
and each of these files contains 2 vectors; for example, One.mat contains vectors V1 and V2, Two.mat contains C1 and C2, and Three.mat contains D1 and D2.
So I want to generate the following combinations:
Comb1.mat contains: One.mat and Two.mat
Comb2.mat contains: One.mat and Three.mat
Comb3.mat contains: Two.mat and Three.mat
Comb4.mat contains: One.mat ,Two.mat and Three.mat
Is there any MATLAB code that can help me achieve this?
Note: I don't want to do this manually, because there are many groups of .mat files and making the combinations by hand would take a long time.
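The enumeration being asked for is "all subsets of size 2 or more" of the file list; in MATLAB, `nchoosek` over a cell array of names inside a loop over subset sizes does this, and here is the same idea sketched in Python with `itertools.combinations` (file names taken from the example above):

```python
from itertools import combinations

files = ["One.mat", "Two.mat", "Three.mat"]

# all non-repeated combinations of 2 or more files
combos = [c for r in range(2, len(files) + 1)
          for c in combinations(files, r)]
# -> [('One.mat', 'Two.mat'), ('One.mat', 'Three.mat'),
#     ('Two.mat', 'Three.mat'), ('One.mat', 'Two.mat', 'Three.mat')]
```

Each tuple then names the .mat files to load and merge into one CombN.mat; for n files this yields 2^n - n - 1 combinations, which is why automating it matters.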
Dear researchers,
I am conducting a meta-analysis using a random-effects model in which some studies have a single effect size, while other studies have 2 or more effect sizes (2 or more subgroups). For example, study A has OR for overweight and obesity combined, while study B has separate results for overweight and obesity. These are my studies (fictional):
A overweight+obesity OR=1.4
B overweight OR=1.1
B obesity OR=1.8
When conducting a meta-analysis on such studies it is advisable to combine B overweight and B obesity using a fixed-effects model (since they have the same control group) and then to perform a meta-analysis using a random-effects model (i.e. conducting meta-analysis using the study as a unit of analysis). Can anyone please tell me how to perform that in R?
I am running the code below to add error bars onto my bar chart; I am trying to work out the standard deviation (sd) of Shannon's diversity index.
The code works up until the second/third last line:
geom_errorbar(aes(ymin=shannon-sd, ymax=shannon+sd), width=0.2,
position=position_dodge(0.9))
This error code keeps coming up
"Error in shannon - sd : non-numeric argument to binary operator"
but this is my data...
Exposure genus shannon sd
Exposed Crustacean 0.000000 0.00000000
Exposed Mollusc 1.199625 0.13291129
Exposed Seaweed 1.513125 0.42093822
Sheltered Crustacean 0.025500 0.07212489
Sheltered Mollusc 1.156750 0.26763341
Sheltered Seaweed 1.848125 0.27264128
Does anyone know where I'm going wrong?
library(ggplot2)
#+++++++++++++++++++++++++
# Function to calculate the mean and the standard deviation
# for each group
#+++++++++++++++++++++++++
# data : a data frame
# varname : the name of a column containing the variable
#to be summariezed
# groupnames : vector of column names to be used as
# grouping variables
data_summary <- function(data, varname, groupnames){
require(plyr)
summary_func <- function(x, col){
c(mean = mean(x[[col]], na.rm=TRUE),
sd = sd(x[[col]], na.rm=TRUE))
}
data_sum<-ddply(data, groupnames, .fun=summary_func,
varname)
data_sum <- rename(data_sum, c("mean" = varname))
return(data_sum)
}
df3 <- data_summary(diversity, varname="shannon",
groupnames=c("Exposure", "genus"))
# Convert dose to a factor variable
df3$genus=as.factor(df3$genus)
head(df3)
p <- ggplot(diversity, aes(x=genus, y=shannon, fill = location)) +
geom_bar(stat="identity", width = 0.5, position=position_dodge()) + theme_minimal() +
geom_errorbar(aes(ymin=shannon-sd, ymax=shannon+sd), width=0.2,
position=position_dodge(0.9))
p + scale_fill_brewer(palette="Paired") + theme_minimal()
What is the most flexible, user-friendly, and easy to update presentation tool for online teaching that can be used by academics with different fields of study?
Teaching interdisciplinary courses needs collaboration with different people with different brain structures. For instance, someone with excellent coding skills may not be able to work on graphical presentations efficiently. On the other hand, another can collaborate with them with an appropriate demonstration and imagination but, may not have a coherent knowledge of coding. How can they cooperate for a better teaching method together?
How can you share the programs and code (like MATLAB code) which you have used in your paper, so that they can be referenced scientifically? Is there any scientific journal that does this for free?
Hello everyone, I'm wondering if anyone can help me with using the finite element analysis code FEMIX.4. I want to simulate the non-linear behaviour of concrete utilizing the 3D multi-directional fixed smeared crack model in FEMIX, and I'm not very familiar with such computer codes.
Thanks in advance.
I have a large database of posts (Yaks) collected from the social media app Yik Yak (back when it was still active). I used a program to collect Yaks at random intervals from 50 randomly selected universities, stratified by US Region, "Locale" (rural vs urban), "Control" (private vs. public), and Size (large vs not large). We collected over 115,000 Yaks. My students used NVivo 11 Pro to code the data (in this case, we were looking at substance use related posts), and we ended up coding 1670 Yaks as into our Substance Use "Node." Now, I want to be able to view (and export) all 1670 of those Substance Use references, along with their associated attributes (region, locale, control, and size). We have been trying for days and can't figure out how to do this. It seems like a basic function that should be easy. Can anyone offer any help?
We want to export the data in order to run various chi-square and regression analyses, and I want the whole data set (all 1670) and not just the summary data (i.e. summed frequencies of substance use Yaks for each region). This way, the appropriate attribute values will be connected to each specific data point during the quantitative analyses.
Hello everyone,
I'm trying to model the motion of a cylinder in both CF and IL directions, but the UDF code that I have can only move the cylinder in the CF direction. Sadly I have never had the chance to learn to code UDFs. Can someone please help me?
I will share the code below.
I really appreciate any help you can provide or at least promote this question to find the best answer to my issue.
FYI, this code is for Vortex-Induced Vibration of cylinders
####################################################
#include "udf.h"
#include "dynamesh_tools.h"
#include "unsteady.h"

FILE *fout;
static real v_prev;

DEFINE_SDOF_PROPERTIES(stage, prop, dt, time, dtime)
{
    Thread *t;
    Domain *d = Get_Domain(1);
    real x_cg[3], force[2], moment[3];
    real cg;                /* Center of gravity position */
    real vel;               /* Cylinder velocity */
    real Fy;                /* Lift force */
    real mass = 8.88;       /* Cylinder mass */
    real fn = 1.06;         /* System frequency */
    real wn = 2*M_PI*fn;    /* System angular frequency */
    real z = 0.002;         /* Damping coefficient */
    real k = mass*wn*wn;    /* System stiffness */
    real c = 2*mass*wn*z;   /* System damping */

    /* Get the thread pointer for the zone whose motion is defined */
    t = DT_THREAD(dt);

    prop[SDOF_MASS] = 8.88;          /* System mass for the 6DOF solver */
    prop[SDOF_ZERO_TRANS_X] = TRUE;
    prop[SDOF_ZERO_TRANS_Z] = TRUE;
    prop[SDOF_ZERO_ROT_X] = TRUE;
    prop[SDOF_ZERO_ROT_Y] = TRUE;
    prop[SDOF_ZERO_ROT_Z] = TRUE;

    cg = DT_CG(dt)[1];
    vel = DT_VEL_CG(dt)[1];
    prop[SDOF_LOAD_F_Y] = -k*cg - vel*c;

    fout = fopen("results.txt", "a");   /* Open file */
    /* Write data into file */
    fprintf(fout, "%g %g %g\n", time, DT_CG(dt)[1], DT_VEL_CG(dt)[1]);
    fclose(fout);
}
Hello;
I am currently working on a UDF using DEFINE_HET_RXN_RATE for heterogeneous reactions. My idea is to use VOF to describe a simple method of VLE using the vapor pressure of the condensable components (in my case water and methanol). I am using the evaporation-condensation UDF provided by Ansys in their UDF manual as a baseline.
Right now, I am having trouble with filtering the reaction rates based on their names. I tried to use the STREQ and !strcmp commands, but every time I compile I get the following message:
- error: member reference base type 'real' (aka 'double') is not a structure or union
Besides a problem with an exponent whose value depends on the temperature, this is the only problem left before I can test the UDF.
Hi.
I am trying to plot an Euler diagram with counts and percentages of counts relative to the total cohort (n=1615). I have written the following code:
plot(euler(c(
"LUTS"=98,
"Symptoms of UTI"=46,
"Positive urine culture"=39,
"Symptoms of UTI&LUTS"=33,
"LUTS&Positive urine culture"=24,
"Symptoms of UTI&Positive urine culture"=22,
"Symptoms of UTI&LUTS&Positive urine culture"=18),
input = "union",
shape = "ellipse"),
key = TRUE,
counts = TRUE,
quantities = list(type = c("counts", "percent"), font=3, round=2, cex=0.8),
fills =list(fill=c(viridis::plasma(n = 3))), alpha = 0.3,
c("#1957FF", "#FF750C", "#FF220C"), alpha = 0.3,
edges=list(lty = 1),
factor_names = TRUE,
labels=list(font=2, cex=1),
legend = FALSE)
and, as expected, I got the graph attached.
My question is: how can one replace the percentages with custom ones, e.g. "59 (3.65%)" instead of "59 (48%)"? Thank you in advance.
I recently started working on a meta-analysis involving 73 studies; however, when the forest plot diagram is made, I am unable to zoom out of it to view my result. The only way I can view it is if I rotate my screen to put it in portrait mode.
Are there any settings or packages in R which allow me to do so? I have tried the zoom package but it didn't work.
thank you.
I have multiple files in a folder which have similar names but different numerical labelling. For example, the files are named phi=1_theta=1, phi=1_theta=2, phi=2_theta=1, and so on. The labelling of the files is attached to this question as well. How do I write suitable Fortran code to open these files in a loop?
There are 16 phi x 16 theta files in the folder overall.
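In Fortran, the filename can be built inside the loop with an internal write, e.g. `write(fname,'(a,i0,a,i0)') 'phi=', i, '_theta=', j`, followed by an `open` on `trim(fname)`. The same enumeration sketched in Python, shown only to illustrate the pattern (the naming scheme is taken from the examples in the question):

```python
# generate the 16 x 16 filenames of the form "phi=<i>_theta=<j>"
names = [f"phi={i}_theta={j}"
         for i in range(1, 17) for j in range(1, 17)]

# opening each file would then be:
# for name in names:
#     with open(name) as fh:
#         data = fh.read()
```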
I work on spectral methods for solving ODEs and PDEs. If you have any equation to solve, send me the governing equation, the boundary conditions, and the initial conditions, and I can propose MATLAB code to solve these equations.
Best regards.
To study the behavior of bituminous mastic, a test called stress relaxation is done; it shows the solid- or fluid-like behavior of the mastic. With the generalized Maxwell model, one can predict the relaxation of the mastic. How does one enter the constitutive equation of the model in code?
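For reference, the generalized Maxwell model's relaxation modulus has the closed Prony-series form G(t) = G_inf + sum_i G_i exp(-t/tau_i), and that sum is usually what gets entered into code; a minimal sketch with made-up coefficients (the G_i and tau_i values below are illustrative, not fitted to any mastic):

```python
import math

def relaxation_modulus(t, g_inf, terms):
    """Generalized Maxwell (Prony series): G(t) = G_inf + sum G_i exp(-t/tau_i)."""
    return g_inf + sum(g_i * math.exp(-t / tau_i) for g_i, tau_i in terms)

# made-up coefficients: equilibrium modulus plus two Maxwell branches
g_inf = 1.0
terms = [(5.0, 0.1), (2.0, 10.0)]   # (G_i, tau_i) pairs

g0 = relaxation_modulus(0.0, g_inf, terms)       # instantaneous: G_inf + sum G_i
g_long = relaxation_modulus(1e4, g_inf, terms)   # long-time limit: -> G_inf
```

A fluid-like mastic corresponds to G_inf = 0 (full relaxation), a solid-like one to G_inf > 0, so the fitted G_inf directly answers the solid/fluid question the test probes.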
I am trying to map NO2 using NASA .nc files in Spyder (Python). These files contain the scientific data sets (SDS) and can be converted to CSV format and opened with Excel. The Excel files then contain data for the quantity of that specific pollutant, in this case NO2, in a format like 9.97E-36. How do I map this? Do I convert it into a Python file first, since those can be mapped easily? But would that require coding? Please help.
I have performed all the attacks for my image cryptography algorithm. Finally, I need to run the NIST tests on my cryptography algorithm. If anyone has the code, kindly share it.
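While the official NIST SP 800-22 suite has a reference C implementation from NIST (and third-party Python ports exist), the first test in the suite, the frequency (monobit) test, is simple enough to sketch directly as a starting point:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test.
    Returns the p-value; p >= 0.01 is conventionally a pass."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # count 1s as +1, 0s as -1
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

# a perfectly balanced sequence gives p = 1.0
p = monobit_test([0, 1] * 500)
```

For a cipher-image test, the bit stream would come from the encrypted image bytes; the remaining SP 800-22 tests (runs, block frequency, DFT, etc.) follow the same pattern of a statistic reduced to a p-value.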
We have feedback from participants. We have created codes and subcategorized them. Instead of fully categorizing and creating themes, we want to put the codes under the related themes to answer our research questions. These themes reflect our research purpose and what we want to investigate.
I wonder what approach this corresponds to?
A deductive approach, maybe?
Thank you in advance!
Hi, I am planning to collect matched data from employees and their supervisors about some organizational variables via an online survey. I have seen that researchers assign codes to group the data together. Can anyone please explain how to assign the codes, or how to collect this kind of grouped data online? Sorry, I am doing this for the first time. Thanks.
Hello everyone
So I'm using Jupyter for my Python programming and coding, but some of the functions and methods aren't working properly, and some don't work at all.
What version of Jupyter and Anaconda should I use? Or should I even start using Anaconda2 instead of Anaconda3?
Thank you in advance for responding to my question.
Hello,
I have written a UMAT subroutine, and when I run the code it completes successfully. I am using C3D8 elements. The subroutine appears to be working, as the job completes successfully. However, when I try to see the results from the ODB file I get the following error message:
“The selected Primary Variable is not available in the current frame for any elements in the current display group.”
Could this be because there is something wrong in my subroutine? I will appreciate your input. Please see the UMAT file attached.
I have tried to select the stress, strain and magnitude and all not available.
I have tried to run the input file with an in-built material model (say, linear elastic) and I did not encounter the same issue. Everything was OK.
Can someone provide me with R code for network analysis, where I can establish the relationships among the variables using the underlying concepts of SNA with data from social media? I have been going through the codes on Stack Overflow and GitHub, but the machine time and processing time are very high when working on data extracted from social media.
Your help will be acknowledged by mentioning your name in the published manuscript on this work.
Dear All,
I am trying to calculate the band structure of a zigzag graphene nanoribbon by creating a large two-dimensional supercell. So basically, the system is constructed as an array of graphene nanoribbons in which the separation distance between the ribbons is large. The input for the k-points in my SCF calculation is chosen to be automatic. Every nosym parameter is set to false.
My question is: why is the number of k-points actually generated by the code more than the input?
Ex: for k-points 12 12 1 0 0 0 -> total k-points is 148
for k-points 40 10 1 0 0 0 -> total k-points is 604
If I'm correct, because of symmetry the number of k-points is usually reduced, but I don't understand why in this case the number of k-points becomes larger than the input?
Thank you,
Satrio
I'm trying to create a heatmap with a matrix of RNA-seq CPM values that has been filtered for low expression and sorted by variance. The matrix is as below.
The gene names in the following matrix correspond to the row numbers of the matrix above.
When I make the heatmap I get the row numbers of the first matrix instead of the gene annotations, so I would like to produce a matrix that looks like the following (edited in Paint, not actually from R).
Is there any way I can do this? I would need the gene names to correspond to their respective row numbers in the 1st matrix, in reference to the 2nd matrix;
i.e. if the row number in matrix 1 is 14, the gene ID should be dnaK.
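The relabelling itself is just a lookup from row number to gene ID. A language-agnostic sketch of that logic, shown here in Python (only dnaK comes from the question; the other names in the mapping are hypothetical placeholders, and in R you would assign the matched names to rownames() of the matrix instead):

```python
# Hypothetical mapping from row numbers in matrix 1 to the gene IDs
# listed in matrix 2 (dnaK is from the question; the rest are
# illustrative placeholders).
row_to_gene = {14: "dnaK", 15: "groEL", 16: "rpoB"}

def relabel_rows(row_numbers, mapping):
    """Swap each row number for its gene ID, keeping the bare number
    (as a string) when no annotation exists."""
    return [mapping.get(r, str(r)) for r in row_numbers]
```

For example, `relabel_rows([14, 99], row_to_gene)` returns `["dnaK", "99"]`, so unannotated rows stay identifiable.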
I am looking for MATLAB code to solve PDEs using RBF (radial basis function) methods. It is for my dissertation work; I would be grateful for any help. Thank you.
My real system (a buck converter) can only take an input of 0 to 1 (duty ratio), and I need to constrain the system so the controller action stays within these bounds. How can I do this?
The code for the project is below in three images; it is Steve Brunton's genetic algorithm code. https://youtu.be/S5C_z1nVaSg?t=470
I assume the u variable is the duty ratio, but I am not sure.
The last image shows the control system being controlled. The transfer function is an adequate fit for the system.
Thanks in advance; if any more information is needed, please ask away.
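One common way to enforce such an input constraint is to saturate the controller action before it reaches the plant, and to apply the same clamp inside the GA's cost evaluation so candidate controllers are scored on the bounded signal they would actually produce. A minimal sketch, assuming u is indeed the duty ratio:

```python
def saturate(u, lo=0.0, hi=1.0):
    """Clamp a controller action to the physical duty-ratio range,
    so the plant model never sees a value outside [lo, hi]."""
    return max(lo, min(hi, u))
```

An alternative is to let the controller output an unbounded value v and map it smoothly with u = 1/(1 + exp(-v)), which keeps the GA's search space unconstrained while guaranteeing u stays in (0, 1).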
Where can I get code for the k-prototypes algorithm for mixed attributes? Has anyone implemented it in any language?
Hi, I did a literature review on innovation diffusion theories (Rogers), and I attempted to test the practicality of these theories when introducing novel medical devices. I did semi-structured interviews with clinicians. So is it correct that I am using a deductive method since I am looking for existing concepts in my data? What method should I use to analyse the interviews? Thematic? Content? Thank you!
Hello everybody,
I need to calculate RUMs (Rigid Unit Modes) for a simple inorganic structure, the hexagonal (P6/mmm) polymorph of WO3. In my opinion, there should be no RUMs there, but I need proof. Can anyone help?
I downloaded the code, but got a lot of errors from the online Fortran compiler...
Hi, I am cleaning and reorganizing some data, and there are some codes from which I removed all quotations, yet I still cannot remove them. The option to delete them is not available. Does anybody have a solution to recommend? Thank you.
Hey guys,
There is a strange thing that is going on with my laptop.
I receive messages like:
'ssh' is not recognized as an internal or external command
but I know I have installed both SSH client and server.
I have tried a variety of methods to figure out why I cannot run these commands on my laptop.
Also, I found that the top side of my terminal (cmd) that introduces the terminal (I have attached a photo here so you can see what's going on) does not mention "system32".
Now, my goal is to be able to run my commands such as SSH.
What is your suggestion?
Thanks!
Maryam
Hello,
I'm trying to create a genind object of an SNP dataset from Whelk to examine allelic richness between sites for some COI_16S sequences (combined sequences from the same individual).
My dataframe is in long format with individual SNPs coded as nucleotides (A, T, C, or G) and uninformative sites coded as "NA". There were two columns in the initial dataframe identifying specimens and populations that were selected out as a character vector "ind" and factor "popchar", respectively, and removed from the dataframe prior to converting. An additional character vector indicating loci names was also created, "locichar". The final dataframe contained the SNPs alone and was called "Whelk_SNP_file_dataonly".
Here is the code for the function df2genind():
df2genind(Whelk_SNP_file_dataonly, sep=" ", ploidy=1, ncode=1, ind.names=ind, loc.names=locichar, NA.char="NA", pop=popchar)
Each time I do this and look at the object using head(), I get this:
// 1 individual; 33 loci; 66 alleles; size: 15 Kb
// Basic content
@tab: 1 x 66 matrix of allele counts
@loc.n.all: number of alleles per locus (range: 2-2)
@loc.fac: locus factor for the 66 columns of @tab
@all.names: list of allele names for each locus
@ploidy: ploidy of each individual (range: 1-1)
@type: codom
@call: .local(x = x, i = i, j = j, drop = drop)
// Optional content
@pop: population of each individual (group size range: 1-1)
The file includes data from 242 individuals, not one. I cannot figure out why the function is reading my data as one individual. Does anyone have an idea of why adegenet is doing this and how I might fix it?
I am looking for a rigorous way to code and analyse visual data (e.g. photos of products) for insights, specifically from a phenomenological perspective.
I am currently working on a lunar rover for a university master's project. My colleagues have successfully developed code that considers not only where the solar panels are situated in relation to each other due to the rover shape (one on each side), but also the longitude and latitude of the position on the Moon.
I am reaching out to people in the relevant field for a helping hand in developing this code, as it fails to include the sun elevation; although elevation remains low at the poles, the likely landing site, it will change the power output of the panels, especially when terrain is also considered.
Advice/Tips/Suggestions fully welcome!!
Thank you.
I want to implement only a gate, similar to the forget gate or output gate of an LSTM cell. Note that I am not implementing the entire LSTM cell; rather, I have certain inputs and a previous output, and I want to pass them through a gate with tanh or sigmoid as the activation. Is there a way to do that in Keras or in PyTorch? Please suggest.
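A single gate is just an affine map of the input and previous output pushed through a sigmoid, then multiplied elementwise into a (typically tanh-squashed) candidate. In Keras that is one Dense(units, activation='sigmoid') layer applied to the concatenated inputs; in PyTorch, an nn.Linear followed by torch.sigmoid. A scalar numeric sketch of the underlying math (the weights wx, wh, b here are hypothetical placeholders, not learned values):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gate(x, h_prev, wx, wh, b):
    """LSTM-style gate value g in (0, 1): g = sigmoid(wx*x + wh*h_prev + b)."""
    return sigmoid(wx * x + wh * h_prev + b)

def gated_output(x, h_prev, wx, wh, b):
    """Multiply the gate into the tanh-squashed candidate, as the LSTM
    output gate does with the cell state."""
    return gate(x, h_prev, wx, wh, b) * math.tanh(x)
```

In a trainable setting, wx, wh, and b become the parameters of that single Linear/Dense layer; everything else in the cell can be left out.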
This is my code for PSO-PF; code.m is the main code. I referenced "PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads".
I have tried to combine them, but the result is very bad. If someone has free time, please help me.
When you have two experts collaborating on a problem, you get better quality data, in less time and with less grant money. Yet it astounds me how many researchers insist on writing code themselves.
To address this, I started a program that connects science researchers with volunteer software developers; my hypothesis is that facilitating these collaborations will result in better, more customizable software. I'm looking for researchers who want to participate in the pilot of this program -- someone who has a coding project and is willing to collaborate with a software developer to get it built.
If you're interested, could you contact me? Thank you
Heather Claxton, PhD
We have had so many problems with NVivo (activation, PC vs. Mac compatibility), and it is very expensive.
The question is whether (A) an analog system, or (B) a digital system, reflects the reality we see.
The A solution can be represented by (1) derivatives, and (2) include the hypothesis of continuity. They both, (1) and (2), match each other; without continuity there is no derivative, and without derivative there is no continuity. They both were used by Isaac Newton in his theory of "fluxions" in calculus. A fluxion is "the term for derivative in Newton's calculus" [1].
The B solution cannot be represented by derivatives, nor include the hypothesis of continuity. Again, the absences of both conditions, (1) and (2), match each other. This fact (i.e., that the absence of (1) and (2) does not matter) remained hidden for centuries in the fake controversy over the primacy of calculus that followed, which was flamed by Newton, as president of the Royal Society, against Leibniz in 1713.
But Galois, around 1830, rediscovered a problem standing for 350 years, determining a necessary and sufficient condition for a polynomial to be solved by radicals, allowing calculus to be done by finite integer fields or Galois fields, thus eliminating the need for continuity in calculus.
How? Usual calculus requires continuity for the existence of derivatives, based on the four operations of arithmetic. It does seem necessary to require continuity, as Cauchy did in analysis in the field of real numbers. However, in fields of finite integers, such as Galois fields, calculus can be defined exactly, not requiring continuity.
Continuity is therefore an artifact of the formulation, and should be avoided. This complies with quantum mechanics and the work of Leon Brillouin, in 1956. It is fictional to consider continuity in mathematics, physics, computer science, and code. We are led today to consider finite integer fields, such as Galois fields, in calculus. We eschew the considerations of so-called "real numbers," as they include irrationals, which cannot be counted. The sum of two numbers in a Galois field is always a number in a Galois field. The sum of two numbers in the real set is never an infinitesimal; they can never be created nor exist.
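The closure claim is easy to check directly: in GF(p), for p prime, addition and multiplication modulo p always land back in the field, and every nonzero element has an exact multiplicative inverse, so the four operations never leave the field and no limit process is invoked. A minimal sketch in GF(7):

```python
P = 7  # any prime p yields a finite field GF(p)

def gf_add(a, b, p=P):
    """Field addition: the result is always again in {0, ..., p-1}."""
    return (a + b) % p

def gf_mul(a, b, p=P):
    """Field multiplication, likewise closed."""
    return (a * b) % p

def gf_inv(a, p=P):
    """Exact multiplicative inverse via Fermat's little theorem:
    a^(p-2) mod p, defined for every nonzero a."""
    return pow(a, p - 2, p)
```

For instance, gf_add(5, 4) == 2 and gf_inv(3) == 5, since 3 * 5 = 15 ≡ 1 (mod 7): every operation is exact, with no continuity assumed anywhere.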
The conclusion is that digital signal processing is the reality, not analog processing. There is no effective quantization in digital processing; the quantum nature simply asserts itself. And this changes how we should view calculus: continuity is not required if one uses Galois fields. What is your opinion?
Hello
I've collected socio-economic data using ODK that I'm analyzing using R. I have a multiple-choice question with several modalities, and I would like to obtain the frequency of each modality.
I tried to convert all modalities of the variable into boolean variables using this code:
qcm$q0_cacao <- ifelse(str_detect(qcm$farm_type, pattern = "cacao"), 1, 0)
qcm$q0_cafe <- ifelse(str_detect(qcm$farm_type, pattern = "cafe"), 1, 0)
qcm$q0_anacarde <- ifelse(str_detect(qcm$farm_type, pattern = "anacarde"), 1, 0)
qcm$q0_hevea <- ifelse(str_detect(qcm$farm_type, pattern = "hevea"), 1, 0)
....
....
I would like to calculate the frequency of each modality under R and have it in tabular form as follows: https://www.dropbox.com/s/jeofweytnhansn3/result.xlsx?dl=0
I know how to do it in Stata using the command "mrtab q0_cacao - q0_xxxx by (village_name) column", but I would like to do all my analysis using only R.
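The tabulation logic itself is simple: split each multi-select answer, then count each choice once per respondent. Sketched here in Python with the question's crop names (in R, the same result is the colSums of the str_detect dummy columns built above, optionally split by village):

```python
from collections import Counter

def multi_response_freq(responses, choices):
    """Frequency of each choice in multiple-response answers, where
    each answer is a space-separated string (ODK's select_multiple
    storage format)."""
    counts = Counter()
    for answer in responses:
        selected = set(answer.split())
        for choice in choices:
            if choice in selected:
                counts[choice] += 1
    return counts
```

For example, with responses ["cacao cafe", "cacao", "hevea"], this counts cacao twice and cafe and hevea once each; dividing by the number of respondents gives the frequencies for the table.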
Hi!
I graduated with a Neuroscience and Psychology BSc in 2016, and my final year project was a bench-lab in vitro study of omega-3 oils applied to fluorescently-stained cortical neurons. I realised that although I had a passion for neuroscience, my first foray into applied research was not promising; I hated the tedium of the process.
I then did an MSc in Mental Health Sciences, with my thesis originally using an EEG/MEG dataset on psychosis patients. Again, I loathed trying to learn (the tedium of) Matlab and learnt instead that I'm fundamentally not a coder.
I changed to studying the mid-term after-effects of a psychedelic drug using a battery of questionnaires. I loved this psychometric approach, and it was easy enough running simple statistical analyses. However this was firmly psychological, not neuroscience.
Now I'm doing a PhD largely employing thematic and other qualitative analyses of altered states, which I thoroughly enjoy. But again, I'm still attempting to retain some know-how in neuroscientific methods. I'm doing some basic secondary EEG analyses on the aperiodic signal - though not loving even the rudimentary coding necessary.
My question is what are some other neuroscience techniques available out there which I may be more suited to? Can one even be a 'neuroscientist' without having to use a wet-lab - or matlab?
Perhaps tDCS or TMS? Could someone maybe elaborate on the types of questions that can be answered with these, or the analyses that are run after them?
Other than fMRI, which I assume always requires programming -- do other neuroimaging modes, e.g. PET/SPECT, also require this?
I'd also be very interested to hear people's thoughts on/experience with working to incorporate as much understanding of neuroscientific findings in strictly psychology studies - While writing psychology papers, referring strongly to the neuro literature to inform and discuss the study's aims and findings?
Thank you very much,
I'd be really appreciative of any insight :)
Pascal
We are working on a qualitative project that uses telephonic interviews. Most of the interviews have been transcribed, so we want to start the coding process. We cannot afford Atlas.ti or NVivo, but many open source software programs either do not cater to multiple collaborators or are limiting. For example, we found Taguette really easy to use; however, it does not support nested/hierarchical coding or the visually pleasing multiple coloured highlighting.
Could you tell us about any other open source software that would overcome the above-mentioned shortcomings? We're open to using Google Docs and Sheets; however, to be honest, we are still figuring out how to go about it without causing issues in the later stages.
Note: We're really new to qualitative data analysis
The data coding process using NVIVO 12 software is relatively straightforward, but I'm having difficulty using the hierarchy charts to visualize the information in terms of demographics (e.g., age, gender, etc.). Does anyone have any tips?
It would be very helpful if someone could provide, for example, an Excel sheet showing how to apply NLDFT to calculate the pore size distribution from experimental N2 adsorption isotherms.
Regards,
This might be a long shot.
I am asking this question because I am currently writing my thesis proposal. It will be about the advantages and disadvantages of using VR in primary and/or secondary education.
Concerning the economic advantages, I am going to list reduced costs as an advantage (e.g. for geography trips).
However, I wanted to further elaborate on this and explain how education is of course benefiting from VR technologies but VR technologies are actually going to benefit from education too.
I believe I will need real life case studies of schools introducing students to VR development/graphic design to prove my point.
I am of course not referring to advanced courses.
I am looking for case studies from anywhere in the world. Curricular and extracurricular classes in schools.
I will then be able to prove that some schools are going to benefit the job market by training future VR specialists.
Many thanks in advance for your reply.
Good evening!
I am trying to prove the existence of Asset-Based Poverty Traps (Carter & Barret, 2006) by using the Threshold Estimation developed by professor Bruce E. Hansen in his paper "Sample splitting and threshold estimation", Econometrica (2000). I have already downloaded the STATA program from professor Hansen's site: http://www.ssc.wisc.edu/~bhansen/progs/ecnmt_00.html. However, I have not been able to install both codes he uses in his programs: "thresholdtest" and "thresholdreg" (I tried findit "thresholdtest"/"thresholdreg" and I found no results)
In that sense, I would really appreciate if somebody helps me out with the source of installation of both codes in STATA.
Thanks in advance
Édgar
I am doing research on neutron spectra measurement and am interested in response function calculation. I want to know the theory behind NRESP and NEFF. Please share your literature with me. Thank you!
I want to apply the NSGA-III algorithm to a dataset for feature selection; could anyone here help me with this? I have code for NSGA-II in MATLAB and the dataset in an Excel file (some attack data).
Hello everyone,
I am using the following code to execute the Dinkelbach algorithm as in the paper attached here. I have a problem with the initialization of the P values: in the code they are initialized, but the sum function does not consider these values. Does anyone have an idea how to solve this problem?
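For readers unfamiliar with the method: Dinkelbach's algorithm turns the fractional objective max N(x)/D(x) (with D > 0) into a sequence of parametric subproblems max N(x) - lam*D(x), updating lam to the current ratio until the subproblem's optimal value reaches zero. A toy sketch of the iteration (the inner argmax here is a simple grid search standing in for whatever solver the paper uses):

```python
def dinkelbach(N, D, argmax, lam0=0.0, tol=1e-9, max_iter=100):
    """Dinkelbach iteration for max N(x)/D(x) with D(x) > 0."""
    lam = lam0
    for _ in range(max_iter):
        x = argmax(lam)          # solve the parametric subproblem
        F = N(x) - lam * D(x)    # optimal value of the subproblem
        lam = N(x) / D(x)        # update the ratio
        if abs(F) < tol:         # F -> 0 exactly at the optimal ratio
            break
    return x, lam
```

One relevant property: the initial lam (analogous to initial P values) only affects how many iterations are needed, not the fixed point, as long as each subproblem is solved to optimality.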
Thank you in advance.
Hello
Please can you help me write this equation in MATLAB 2015a? I have tried many ways of writing it but could not get any result.
Thank you for your help in advance.
I am looking for MATLAB code for the multiple imputation method for missing data analysis; if anybody can help me, I will be grateful.
Dear all!
I am currently looking for code (preferably MATLAB) to solve the time-dependent drift-diffusion equations for a mixed ionic-electronic conductor with a single ionic and a single electronic species; e.g., the material is a hole-conducting semiconductor and has a single, singly negatively charged, mobile ion. Contacts can be considered ion-blocking and no chemical reaction should occur.
I have found several papers outlining the solution for steady-state conditions (e.g. Y. Gil et al., Solid State Ionics 179 (2008)), but I was unable to find something similar for the transient solution.
Unfortunately my programming skills have steadily degraded ever since I left college and I am looking for a more detailed outline to avoid the usual pitfalls ;)
If someone has recently come across something similar I am very grateful for any help!
Yours,
Chris
I am not able to understand how the length of the string is compressed with this algorithm. For example, if the input string is abcabcbfc, then the output will be
abc(0,3)b(3,1)bf(3,1); here the size of the encoded string is not decreased. Am I missing something? Can someone explain this?
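Worth noting: an LZ77-style coder only wins once the window contains repeats long enough that an (offset, length) pointer is cheaper than the literals it replaces; on a 9-character input the pointers cost as much as or more than the raw characters, so no shrinkage is expected. A minimal greedy sketch of one common textbook variant, emitting (offset, length, next_char) triples:

```python
def lz77_encode(s, window=255):
    """Greedy LZ77-style encoder producing (offset, length, next_char)
    triples; offset 0 / length 0 means a bare literal."""
    out, i = [], 0
    while i < len(s):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # Overlap with the lookahead is allowed, as in classic LZ77;
            # one character is reserved for the literal part of the triple.
            while i + k < len(s) - 1 and s[j + k] == s[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        out.append((best_off, best_len, s[i + best_len]))
        i += best_len + 1
    return out
```

On "abcabcbfc" this emits six triples for nine characters, i.e. no saving, while a highly repetitive input like "aaaaaaaab" collapses to just two triples.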
I am solving hard, stiff ODEs and need an efficient ODE solver for these systems. MATLAB code would be very much appreciated.
Software engineering: software effort estimation.
I need to introduce Cr as a trap in the bulk area of my device which I am simulating using SILVACO TCAD. Can anyone help me to do this?
I'm trying to clone the repository for VCFtools from GitHub and configure it. These two commands always run successfully in Git Bash:
git clone https://github.com/vcftools/vcftools.git
cd vcftools
However, I cannot get the next two commands to work:
./autogen.sh
./configure
Whenever I try the first command, I get the following error message:
matth@DESKTOP-FKTJ1JU MINGW64 ~/vcftools (master)
$ ./autogen.sh
./autogen.sh: line 3: autoreconf: command not found
All the files are in the GitHub folder on my computer (the GitHub folder is in Documents, in case that makes a difference). I need this program for my Masters thesis. Does anyone have any idea what's going on?
Please help if you can. Thanks in advance.
I have attached a file with the problem that I have to code in MATLAB.
If the delay is constant, then I can code it using dde23, but in the case where tau(t) = exp(t)/(1+exp(t)) we have to use the ddesd function. How do I code this using ddesd?
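ddesd exists precisely because a time-dependent (or state-dependent) delay forces the solver to interpolate its own stored history at t - tau(t) on every step. As a language-agnostic illustration of what such a solver does internally, here is a crude fixed-step Euler sketch in Python (for intuition only, not a substitute for ddesd's adaptive method and error control):

```python
def solve_dde_euler(f, tau, history, t0, t1, dt):
    """Fixed-step Euler for y'(t) = f(t, y(t), y(t - tau(t))).
    'history' supplies y(t) for t <= t0; the delayed value is read
    back from the stored solution by linear interpolation."""
    ts, ys = [t0], [history(t0)]

    def y_at(t):
        if t <= t0:
            return history(t)
        for i in range(len(ts) - 1):  # linear interpolation over stored points
            if ts[i] <= t <= ts[i + 1]:
                w = (t - ts[i]) / (ts[i + 1] - ts[i])
                return ys[i] + w * (ys[i + 1] - ys[i])
        return ys[-1]

    t, y = t0, history(t0)
    while t < t1:
        y = y + dt * f(t, y, y_at(t - tau(t)))
        t += dt
        ts.append(t)
        ys.append(y)
    return ts, ys
```

Passing tau(t) = exp(t)/(1+exp(t)) as the tau argument mirrors the problem above; in MATLAB, ddesd takes the same three ingredients (the right-hand side, the delay function, and the history) as its first arguments.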
I want to send data with visible light communication (VLC). Is there an approved method, or do I have to program everything (text files, PDFs, multimedia such as MP4/MP3, WinRAR archives, ...)? I want to send random data (a PRBS); is that useful for sending?
Hello
I am working on the following: I have 65 existing definitions of a concept. From these 65 definitions, I try to distill a representative definition via coding procedures. Thus, I code each definition to find patterns.
Now I have all definitions coded, and I can see how many times a code is used in the existing definitions. For example: code X is present in 20 of the 65 definitions.
As I want to develop a representative definition (and not one which contains all codes), I would set a cut-off level, in the sense of: only codes which are present more than e.g. 20 times in the existing definitions will be included in my representative definition.
Now I was wondering, is there a way to determine the cut-off-level statistically?
I am looking for MATLAB code for the EM algorithm for system identification models (i.e., ARX or ARMAX).
Does anyone have experience with the feature "coding focus groups" in ATLAS.ti 8? I have focus group data to analyse (Grounded Theory) and I am wondering what the advantages of this feature in the software might be. Is it about having direct links between quotes and speakers?
Does anyone have a matlab code for solar PV cells?
Hi,
For my master's thesis I am exploring the effects of different types of stereotype activation (i.e., stereotype reactance, stereotype threat, and no stereotype activation) on negotiation performance. Based on the literature, I believe that the communication style (i.e., flirtatious or non-flirtatious) of the confederate might moderate the effect of stereotype activation on negotiation.
What I have done so far is dummy-code both the predictor variable (no stereotype activation = 0) and the moderator variable (non-flirtatious communication = 0). Additionally, I computed two new interaction-term variables: dummy stereotype reactance * dummy flirtatious, and dummy stereotype threat * dummy flirtatious. But I am not sure what to do next. I've tried the regular linear regression test in SPSS: I placed both dummy variables of stereotype activation and the dummy variable of communication style in the first model, then added the interaction-term variables in the second model.
I've also tried Hayes' PROCESS moderation test. I conducted two separate tests because it doesn't allow you to insert multiple IVs, but I'm not sure if this is the right way to go.
Could anybody explain to me how to conduct a moderation analysis with both the predictor and the moderator as categorical variables? Or am I on the right path, and if so, what do I do next?
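For what it's worth, the model being fitted here is a single regression with all five predictors entered together; each case contributes a design-matrix row built from the two activation dummies, the flirtatious dummy, and their products. A sketch of that row (the variable names are mine, not from any package):

```python
def design_row(react, threat, flirt):
    """Design-matrix row for a moderated regression with a 3-level
    dummy-coded predictor (baseline: no stereotype activation) and a
    2-level dummy-coded moderator (baseline: non-flirtatious):
    [intercept, react, threat, flirt, react*flirt, threat*flirt]."""
    return [1, react, threat, flirt, react * flirt, threat * flirt]
```

Testing moderation then amounts to asking whether the last two coefficients are jointly zero, which is what the hierarchical (model 1 vs. model 2) comparison in SPSS reports as the R-squared change.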
Thank you very much!
Nikki
Although my code apparently has no errors, it does not produce a proper solution. I have attached some parts of my code. I would be grateful if you could give me some tips on how to implement SOS1 variables in GAMS.
In a two-bay portal frame, should I consider these 2 cases:
the live load on each bay alone, or only the live load on the total roof? And where can I find the reference (code) for the answer to this question? The same question applies for wind and dead load.
If you check typical two-span portal frames, the middle column is very small with respect to the edge columns; apparently they consider only the case where the load is on the total roof. But why? Is this part of a code?