Science topic

Coding - Science topic

Explore the latest questions and answers in Coding, and find Coding experts.
Questions related to Coding
  • asked a question related to Coding
Question
3 answers
I want a bifurcation diagram of two ODEs against the parameter tau. The existing MATLAB code is attached.
Relevant answer
Answer
See this video explaining the Matlab code for plotting the bifurcation diagram, with code available in the description https://www.youtube.com/watch?v=O506G8DtmQk
  • asked a question related to Coding
Question
2 answers
This is a code block from the nutWallFunction library in OpenFOAM, in which the effective turbulent kinematic viscosity at the wall ($\nu_{t,w}$) is calculated using the resolved field (in the case of LES) or the mean field (in the case of RANS) and $y^+_p$ (the wall-normal distance of the first cell centre). This allows a new viscosity value to be set as the wall boundary condition via the log law, assuming the first cell centre lies in the logarithmic layer of the universal velocity profile.
Now, in the member function nutUWallFunctionFvPatchScalarField::calcYPlus(), the yPlus value is iterated to convergence, with a maximum of 10 iterations. Why are these iterations needed, and why is the maximum number of iterations 10? The code is reproduced below:
tmp<scalarField> nutUWallFunctionFvPatchScalarField::calcYPlus
(
    const scalarField& magUp
) const
{
    const label patchi = patch().index();

    const turbulenceModel& turbModel = db().lookupObject<turbulenceModel>
    (
        IOobject::groupName
        (
            turbulenceModel::propertiesName,
            internalField().group()
        )
    );

    const scalarField& y = turbModel.y()[patchi];
    const tmp<scalarField> tnuw = turbModel.nu(patchi);
    const scalarField& nuw = tnuw();

    tmp<scalarField> tyPlus(new scalarField(patch().size(), 0.0));
    scalarField& yPlus = tyPlus.ref();

    forAll(yPlus, facei)
    {
        scalar kappaRe = kappa_*magUp[facei]*y[facei]/nuw[facei];

        scalar yp = yPlusLam_;
        scalar ryPlusLam = 1.0/yp;

        int iter = 0;
        scalar yPlusLast = 0.0;

        do
        {
            yPlusLast = yp;
            yp = (kappaRe + yp)/(1.0 + log(E_*yp));
        } while (mag(ryPlusLam*(yp - yPlusLast)) > 0.01 && ++iter < 10);

        yPlus[facei] = max(0.0, yp);
    }

    return tyPlus;
}
My question concerns the do-while loop at the end, which iterates yPlus.
Relevant answer
Answer
CFD software is based on numerical methods that predict fluid behaviour under various conditions, e.g. LES and RANS turbulence modelling. Unlike exact solutions, numerical methods involve approximations of the governing fluid parameters, which cannot be evaluated in one pass and therefore need iterative solvers.
During this process, several types of error are introduced while approximating a variable, e.g. round-off errors (machine precision) and truncation errors, depending on the type of numerical scheme used.
Depending on the nature of the fluid and its interaction with the surrounding environment (in your example, the yPlus wall function, which is a measure of the frictional resistance near the wall), solutions obtained through numerical schemes can carry significant error and misrepresent the fluid behaviour entirely.
Therefore, the solution is refined by repeating the process with better approximations and schemes, aiming at the exact parameter value; this leads to iterations.
During the iteration process, the error can amplify or shrink (which indicates the stability of the solution) depending on the boundary conditions used. So an error tolerance is introduced as a condition in the numerical algorithm to make the solution meaningful and realistic, i.e. a close approximation of the fluid behaviour. In your case, the wall shear stress is being approximated using wall units in the logarithmic boundary layer.
Once that condition is satisfied, the process stops and proceeds to the next dependent variable, and so on until the complete solution is obtained.
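To see concretely why the loop exists and why a cap of 10 is generous: the log law only defines y+ implicitly through y+ ln(E y+) = kappa*magUp*y/nu, so a fixed-point iteration is used, and it contracts quickly in the log layer. A standalone Python sketch of the same update (constants are typical defaults, not read from any particular case):
import math

# same update as the OpenFOAM loop: yp = (kappa*Re_y + yp) / (1 + log(E*yp))
kappa, E, yPlusLam = 0.41, 9.8, 11.53   # typical default wall-function constants
kappaRe = kappa * 500.0                 # kappa * magUp * y / nu for some near-wall cell

yp, ypLast = yPlusLam, 0.0
for it in range(10):                    # same iteration cap as the OpenFOAM code
    ypLast = yp
    yp = (kappaRe + yp) / (1.0 + math.log(E * yp))
    print(it, yp)
    if abs((yp - ypLast) / yPlusLam) <= 0.01:   # same relative tolerance
        break
Running this shows convergence within 3-5 passes for typical values; the cap of 10 is a safety net against pathological cells rather than a number the result is sensitive to.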
  • asked a question related to Coding
Question
1 answer
A human can only aspire to fluency in so many different languages before mixing up words due to code switching. Thus, MAYBE those who cannot learn so many languages turn to linguistics and coding to earn money.
Relevant answer
Answer
Some ideas and associations:
You state that "a human can only aspire to fluency in so many different languages before mixing up words due to code switching". I don't know if this is true at all; on what research is it based? In my own situation, I am fluent in 5 languages and do not mix up words or get confused about which language a word belongs to.
It has been demonstrated that fluency in language corresponds to numeracy. Children who were read a lot of stories in pre-school, for example, were no better in general subjects than other children, but they were better at maths later on. Both implicate logical thinking; without logical thinking, humans cannot string words into a longer narrative either.
People choosing coding or computer science may prefer to work individually and not in groups. That dynamic, social vs solo, is different from proficiency at languages.
  • asked a question related to Coding
Question
1 answer
How big is the difference between what is written in many AI papers and their real code?
Relevant answer
Answer
Writing research papers and actual programming are indeed two different fields of knowledge, although they can be complementary. Here's an overview of the differences and the gap between what's written in many AI papers and the actual code:
  1. Research Paper Writing:
  • Academic Skill: Writing research papers requires a strong grasp of academic writing conventions, including proper citations, formatting, and adherence to research methodology.
  • Communication: Researchers must effectively communicate their ideas, theories, methodologies, and findings to a diverse audience, including fellow researchers, policymakers, and the general public.
  • Literature Review: Researchers must conduct comprehensive literature reviews to position their work within the context of existing knowledge and identify gaps.
  • Hypothesis and Experiments: Papers often present hypotheses, experimental design, data analysis, and statistical methods used to draw conclusions.
  2. Actual Programming:
  • Technical Skill: Programming requires practical coding skills, including proficiency in programming languages, software development, debugging, and software engineering principles.
  • Implementation: Turning theoretical concepts into working software, models, or applications requires a deep understanding of algorithms and data structures.
  • Testing and Optimization: Programmers must test, optimize, and debug code to ensure it functions correctly and efficiently.
  • Deployment: Real-world applications involve considerations such as scalability, security, and integration with existing systems.
Gap Between Papers and Code: The gap between what's written in many AI papers and the actual code can vary widely. Several factors contribute to this gap:
  1. Simplification: Research papers often simplify complex algorithms or models for the sake of clarity and brevity. Actual code implementations may need to address nuances and edge cases.
  2. Algorithmic Complexity: AI research papers may describe high-level algorithms but leave the low-level implementation details to the reader. Implementing these algorithms robustly can be challenging.
  3. Resource Availability: Researchers may have access to specialized hardware, datasets, or computational resources that are not readily available to others.
  4. Algorithm Tweaking: Researchers may experiment with various parameters and configurations, making it challenging to replicate their exact results without detailed guidance.
  5. Evolution of Research: AI research evolves rapidly, with new techniques and models emerging frequently. Papers may become outdated relatively quickly.
  6. Open-Source Efforts: Some researchers actively contribute to open-source projects, sharing code and making it more accessible and usable.
It's worth noting that efforts are being made to bridge the gap between AI research papers and code. Some conferences and journals encourage authors to publish code alongside their papers, and open-source communities play a crucial role in creating accessible implementations of research.
Overall, while writing papers and programming are distinct skill sets, they are interconnected in the field of AI research, where practical implementations often validate and extend theoretical findings. AI tools, including GPT-3, can assist in generating code or summaries of research papers, further bridging the gap between these two domains.
  • asked a question related to Coding
Question
11 answers
In experimental design, when the aim is to find an objective function or response as a function of variables, the real variables are converted to coded variables. For example, if the real temperature levels are 30, 40, and 50 °C, the coded ones can be obtained using the following equation: X = (T - T_center)/(T_high - T_low). So the coded temperatures are -0.5, 0, and 0.5.
Relevant answer
Answer
Anees Abdullah Khadom You mean you want to standardize them in some way to make them comparable? In theory there is nothing wrong with this. What you seem to be doing is min-max normalisation, (x - min(x))/(max(x) - min(x)). This can sometimes be helpful (sometimes), for example in machine learning.
In practice this mostly means you lose information, and the only thing you could say is X1 > X2 or similar. The results beyond the study are completely meaningless (literally): 0.6 > 0.2, but so what? Often the conclusion is that 0.6 is more important than 0.2 because it is larger, but the only thing that adds meaning to the number is the "context", which is removed. Moreover, if the maximum temperature is 200 and the minimum is -10, they become 1 and 0; the same holds for 30 and 5. You remove any meaning carried by degrees Celsius: boiling point, evaporation, freezing, etc. are all ignored. Can you imagine the ideal gas law being unitless? The meaning lies in the "unit" or "context" and in how it relates beyond a single study. Losing two hairs from my head per day is not so dramatic, but losing two limbs a day, I would rather vote for the first. Both numbers were 2; only the "unit" or "context" differed.
This form of "standardisation" or "normalization" is done because we want a dichotomy; it is easy. The only goal is: 1) X1 > X2 = T, or 2) X1 > X2 = F; and if 1), X1 is deemed more important than X2. I too made such mistakes, but when I criticise and basically banish my own results to the "pseudo-sciences" (for mental effect), explaining why, nobody seems to care; they want these standardized monsters.
Sorry for the harsh response, but now, 100+ years later in aquatic ecology/limnology, when I try to find the relation between a species (response) and pollution (predictor), I cannot find a single regression coefficient, and even worse is the absence of any online published data. Then what have we been doing for 100+ years? When I ask any ecologist what the impact on species numbers is, nobody knows. So on what "meaning" are our environmental laws based? I would simply avoid such things altogether as much as possible, and if you do publish, publish the data online.
Best,
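For completeness, the coding transform from the question is a one-liner; a small Python sketch:
def code(T, low, high):
    # X = (T - T_center) / (T_high - T_low), as defined in the question
    return (T - (low + high) / 2) / (high - low)

print([code(T, 30, 50) for T in (30, 40, 50)])   # [-0.5, 0.0, 0.5]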
  • asked a question related to Coding
Question
3 answers
RENT @Annuities #Global Markets ^Ownerhip *Roles %Rules (Privacy:Transparency)
Relevant answer
Answer
The following was generated with the Claude.ai LLM (copied and pasted; note that Claude can't visit links, and responses may contain hallucinations):
Here is a definition of how choice-overload bias can impact the asset structuring of economic variables:
Choice-overload bias refers to the phenomenon where having too many choices or options can lead to decision paralysis, confusion, and potentially poorer decisions. This can apply to economic variables like asset structuring in a few key ways:
  • Portfolio allocation - Having too many investment options to choose from can lead to investors feeling overwhelmed and potentially allocating their assets suboptimally. They may overdiversify to try to cover all bases or get analysis paralysis and end up with a poor asset allocation.
  • Retirement plans - Employees given many options for 401k plans, funds, contribution levels, etc. may end up not participating optimally in retirement savings. The overload causes confusion and inaction.
  • Consumer purchases - Consumers faced with a huge array of product variations and add-ons may get overwhelmed and make poorer purchasing decisions. This can lead to lack of consumer optimization and poorer economic outcomes.
  • Policy options - Policymakers weighing many potential regulatory or policy options on an issue may suffer from overload bias, leading to confusion, lack of clear direction, and potentially suboptimal policy structures.
The overall result is that economic actors like investors, consumers, employees, and policymakers get overwhelmed by too many choices and end up structuring assets, plans, purchases, and policies in a more confused and suboptimal manner. Limiting options and presenting clear defaults is one way to circumvent the choice-overload bias.
right, ... can you answer the question posed please?
Here is an attempt to directly answer the question of how choice-overload bias impacts asset structuring of economic variables:
The abundance of choices available when structuring assets leads to several key impacts due to choice-overload bias:
  • Portfolio fragmentation - Investors create overly complex portfolios trying to diversify across too many assets, resulting in fragmented, inefficient portfolios.
  • Lack of optimization - With an overwhelming menu of options, investors fail to rigorously identify and optimize the best asset allocation for their goals.
  • Default bias - Investors stick with defaults or make simplistic allocation decisions to cope with too much choice. This leads to suboptimal asset structure.
  • Delayed decisions - Facing too many choices, investors delay or avoid restructuring assets that may require changes. This causes asset rigidity and vulnerability.
  • Herding behavior - Investors imitate others' asset allocation rather than independently optimizing. This leads to suboptimal and potentially unstable economic asset bubbles.
  • Decision paralysis - Some investors freeze up and fail to restructure assets at all when faced with overwhelming investment options.
In summary, the availability of too much choice distorts the asset structuring process away from optimal economic decisions. Investors fail to rigorously identify optimal asset allocation and instead display inertia, simplistic heuristics, herd behavior, fragmentation, and paralysis.
  • asked a question related to Coding
Question
3 answers
I have tried the following steps, but they are giving errors:
1. !pip install pymatgen
2. !pip install mp_api
3. from mp_api.client import MPRester
with MPRester("my_api_key") as mpr:
    docs = mpr.summary.search(material_ids=["mp-149", "mp-13", "mp-22526"])
And got a very long error as a result.
What am I doing wrong?
Relevant answer
Answer
Querying data from the Materials Project using its API for machine learning typically involves several steps. The Materials Project is a database of materials properties and can be a valuable resource for materials science-related ML projects. Here's a general guide on how to query data for machine learning from the Materials Project API:
1. Sign Up and Obtain an API Key:
· Go to the Materials Project website (https://materialsproject.org/).
· Sign up for an account if you don't already have one.
· Once you're logged in, navigate to your dashboard and find your API key. You will need this key to authenticate your requests to the API.
2. Choose Your Materials and Properties:
· Determine the specific materials and properties you want to use for your machine learning project. The Materials Project provides data on a wide range of materials and their properties, such as crystal structures, electronic properties, thermodynamic properties, and more.
3. Construct API Queries:
· Use the Materials Project API to construct queries that retrieve the data you need. The API provides various endpoints for different types of data.
· You can use the pymatgen library in Python, which is designed for materials science computations and integrates seamlessly with the Materials Project API.
4. Retrieve and Preprocess Data:
· Once you've constructed your queries, retrieve the data from the API. Depending on your specific ML task, you may need to preprocess the data to suit your needs. This could include cleaning, feature engineering, and data transformation.
5. Training and Evaluating Your ML Model:
· Use the retrieved and preprocessed data to train and evaluate your machine-learning model. Depending on your project's goals, you may apply various ML algorithms and techniques.
6. Iterate and Optimize:
· Iterate on your ML model and data selection to improve its performance.
· Consider experimenting with different materials and properties to achieve better results.
7. Citation and Compliance:
· Ensure you follow the Materials Project's terms of use and citation guidelines. Properly attribute the data you use in your research and publications.
8. Deployment (if applicable):
· If your ML model has practical applications, deploy it to your desired platform or integrate it into your materials science workflow.
Remember that the Materials Project API may have rate limits and usage restrictions, so be sure to check their documentation for any updates or limitations before making extensive use of their services.
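For reference, a minimal query along the lines of the one in the question, assuming a current mp-api client and an API key generated on the next-generation Materials Project dashboard (the new client rejects legacy keys, which is one common cause of long tracebacks):
from mp_api.client import MPRester

with MPRester("YOUR_API_KEY") as mpr:   # key from the next-gen Materials Project dashboard
    docs = mpr.summary.search(
        material_ids=["mp-149", "mp-13", "mp-22526"],
        fields=["material_id", "formula_pretty", "band_gap"],
    )
    for doc in docs:
        print(doc.material_id, doc.formula_pretty, doc.band_gap)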
  • asked a question related to Coding
Question
11 answers
The Ricci tensor assumes the role of helping us understand curvature. Within my Universal Theory research, the Ricci tensor unveils itself. As detailed in my research document on the Grand Unified Theory Framework (of which, as technology advances, there may be more than one viable form), I was pleased to find that the Ricci tensor typically vanished to zero for the Schwarzschild metric, as it should, back when I was performing feasibility and speciousness checks via calculations with other experts and myself. But in practical applications of the Grand Unified Theory Framework, this vanishing to zero has very intriguing consequences.
One of said consequences was something small and interesting I wanted to discuss. The purpose is to highlight the intricacies of implementing such comprehensive concepts in practical settings such as code, and thus to detail the challenges researchers may face when translating comprehensive physics and mathematics formulations into concrete applications. More often than not, I have found this to require innovative adaptations and problem-solving. I also want to hear whether anyone has had similar experiences.
My recent and past ventures into authenticating the Universal Theory framework in code, writing complex neural-network and AI code with it, as well as quantum-computing code, hit a lot of interesting hurdles. Immersed in this work, I encountered a peculiar happenstance: the vanishing of the Ricci tensor to zero in the code processes. I didn't realize why a lot of the code wasn't working. It was because I was trying to run iterative machine-learning code, and since it incorporated the Universal Theory in a mathematically accurate way (authenticating it in various ways via code is also possible), the code would never work with the full form of the theory: the Ricci tensor would always vanish to zero for the Schwarzschild metric within the subsequent processes running off the initial code. While this was validating for my theory, it was equally frustrating to realize it could be a massive hurdle to instituting it in code.
This unexpected twist threw me into a world where certain possibilities seemed to evaporate into the ether. Setting values for the tensor g_ij (the Einstein tensor form utilized in the Grand Unified Theory Framework) in code demanded a lot of intricate modifications.
I found myself utterly lost; I thought the code was specious, before I thought to check the Ricci tensor calculations and the Christoffel and Riemann formations, and got it running. I think it's quite scary, in a way, that someone could have code similar to mine or another form of unified theory, and if they didn't have sufficient knowledge of relativity they might never know whether the code worked. Few seem to have attempted to embrace the tangible variations of complex frameworks within code. I wanted to share this as an example of multidisciplinary science: coding and physics together is always interesting, and there isn't a lot of support or information for people venturing into these waters.
I would also like to know what everyone thinks of multidisciplinary issues such as this, wherein one may entirely miss valuable data by not knowing what to look for, and how that may affect the final results and calculations of research and experimentation. In this situation, I ultimately had to employ some of the concepts in my research document to arrive at the Ricci tensor without any formation of Christoffel or Riemann symbols in the subsequent processes. I thought that was interesting from a physics and coding perspective too, because I never would have known how to parse this code to get it functioning without knowledge of relativity.
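As an aside for readers who want to verify the uncontroversial part of this, that the Ricci tensor vanishes for the Schwarzschild metric, the check can be done symbolically with nothing but sympy; a minimal illustrative sketch (plain textbook formulas, not the author's framework):
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
rs = sp.symbols('r_s', positive=True)
x = [t, r, th, ph]
f = 1 - rs / r
# Schwarzschild metric g_ab in (t, r, theta, phi) coordinates
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_bc = (1/2) g^ad (d_b g_dc + d_c g_db - d_d g_bc)
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
          + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
          for d in range(n)) / 2)
          for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_ab = d_c Gamma^c_ab - d_b Gamma^c_ac
#                     + Gamma^c_cd Gamma^d_ab - Gamma^c_bd Gamma^d_ac
def ricci(a, b):
    return sp.simplify(sum(
        sp.diff(Gamma[c][a][b], x[c]) - sp.diff(Gamma[c][a][c], x[b])
        + sum(Gamma[c][c][d] * Gamma[d][a][b] - Gamma[c][b][d] * Gamma[d][a][c]
              for d in range(n))
        for c in range(n)))

print(sp.Matrix(n, n, lambda a, b: ricci(a, b)))   # the zero matrix, as expected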
Relevant answer
Answer
Also, I'm not sure where you are looking, but I suppose I WAS wrong. It's actually the top 1 percent relative to its release and 82 percent overall, and the stats are the same for my profile. I've attached the picture. Leo Lazauskas
  • asked a question related to Coding
Question
1 answer
I've made a simple simulation in Simulink consisting of a PV panel, a DC-DC converter, a resistive load, and a function block acting as the MPPT, running the code for the grey wolf algorithm.
When I increase the solar irradiance, the current from the panel (2nd graph) starts fluctuating. When I decrease the solar irradiance, the voltage from the panel (4th graph) starts to fluctuate. Furthermore, if I change the value of the load resistor to 100 ohms from 10 ohms, both the current and the voltage go haywire.
I have no idea what is causing this and was hoping someone could point out my mistake. I have attached the pictures below along with the Simulink file. I am willing to provide more information if needed.
Relevant answer
Answer
You need to optimize your duty cycle parameters appropriately. You can refer to this paper:
  • asked a question related to Coding
Question
3 answers
I am curious about the technology behind plant identifier apps. How are they coded? Is there already an app that can differentiate all the species in one frame? For example, can it identify each species in a bouquet of flowers?
Relevant answer
Answer
Plant identifier apps typically use a combination of computer vision and machine learning techniques to identify plant species. The coding involves creating a database of plant images with associated labels, training a deep learning model using this data to recognize different plant features and patterns, and then implementing algorithms to process input images and compare them with the trained model to make predictions about the species. While there are advanced plant identifier apps available that can identify a wide range of species from a single image, accurately differentiating multiple species within a bouquet of flowers in a single frame may still be challenging due to overlapping plant parts and variations in lighting and angles. However, ongoing advancements in computer vision and machine learning continue to improve the capabilities of these apps.
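To make the pipeline concrete, here is a hedged sketch of the classification step using a generic ImageNet model in Keras; a real plant app would fine-tune on a plant-specific dataset and, for the bouquet case, first run an object detector so each flower gets its own crop (the file name below is hypothetical):
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")   # stand-in classifier

img = tf.keras.preprocessing.image.load_img("flower.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[None, ...]   # add batch dimension
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

preds = model.predict(x)
# top-3 predicted labels with confidence scores
for _, name, p in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(name, round(float(p), 3))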
  • asked a question related to Coding
Question
3 answers
Has anyone got suggestions for types of qualitative analysis that combine inductive and deductive approaches? For a project I'm working on, we identified a relevant pre-existing framework about accessibility maturity with 5 key areas. We've completed semi-structured interviews about strengths and barriers in these 5 areas, and are now analysing the data. I'd like to focus on findings from each of these five areas as the main sections in our report, but also code inductively within each of them to discuss the emergent patterns we've found. Any suggestions for an analysis approach that might suit? I've looked at thematic analysis, content analysis, and framework analysis but none of these seem to fit as we're beginning with a pre-existing framework then coding inductively within it. Thanks for your suggestions!
Relevant answer
Answer
Gillian Mccarthy Ultimately, all qualitative research analysis includes inductive and deductive components. What you described would be twofold: (1) begin with a deductive analysis focused on the 5 areas from your framework. Within this process, keep your mind open to ideas that do not align with the framework; that is, if an idea keeps coming up in the transcripts but does not fit one of the 5 areas, I recommend putting these data in a separate 'category' or code, depending on how you are capturing your analysis process. (2) The second step is to identify the patterns within each of the 5 areas. In this step, you combine data that are similar in meaning, until you get to between 2 and 5 groups. You would also do this step for any group of data that emerges outside the framework.
See the documents attached. I hope this helps.
Veronica
  • asked a question related to Coding
Question
6 answers
Hello, I'm using the Piecewise Linear Function in Origin to fit my data, which should contain 9 segments. Following the method from OriginLab (http://www.originlab.com/doc/Tutorials/Fitting-Piecewise-Linear), I changed my code; it compiles, but when I use it to fit my data I get just 2 segments. My code is pasted here; I hope you can give me some comments or ideas. Thanks a lot.
Relevant answer
Answer
This video might be helpful
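Without the Origin code it is hard to say why only 2 segments appear, but the usual suspects are parameter initialisation and segment bookkeeping. As a cross-check of the fit itself, a continuous 9-segment piecewise-linear fit takes only a few lines in Python (fixed, evenly spaced knots assumed; the data here are synthetic):
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
xdata = np.linspace(0, 9, 200)                      # toy data; replace with your own
ydata = np.abs(np.sin(xdata)) + 0.05 * rng.normal(size=xdata.size)

knots = np.linspace(0, 9, 10)                       # 10 knots -> 9 linear segments

def pw_linear(x, *yk):
    # continuous piecewise-linear curve through the points (knots, yk)
    return np.interp(x, knots, yk)

p0 = np.interp(knots, xdata, ydata)                 # initial guess: data at the knots
yk, _ = curve_fit(pw_linear, xdata, ydata, p0=p0)
print(np.round(yk, 3))                              # fitted knot heights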
  • asked a question related to Coding
Question
3 answers
The Lyapunov exponent of a coupled map lattice with a one-dimensional local map is known. But what is the theory behind the Lyapunov exponent of a CML with a 2D local map?
Relevant answer
Answer
I think the key technique is finding the Jacobian matrix of the two-dimensional CML.
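That is indeed the crux: for a lattice of L sites with a 2D local map, the tangent dynamics live in 2L dimensions, so the exponents come from iterating the full 2L x 2L Jacobian along an orbit (QR factorisation for the whole spectrum, or simple renormalisation of one tangent vector for the largest exponent). A hedged Python sketch for diffusively coupled Henon maps, parameters chosen arbitrarily:
import numpy as np

L, a, b, eps = 16, 1.4, 0.3, 0.1           # lattice size, Henon parameters, coupling
steps, burn = 4000, 500
rng = np.random.default_rng(1)
x = rng.uniform(-0.1, 0.1, L)
y = rng.uniform(-0.1, 0.1, L)

S = np.roll(np.eye(L), 1, axis=1)          # periodic shift matrix
C = (1 - eps) * np.eye(L) + eps / 2 * (S + S.T)   # diffusive coupling of x-components

v = rng.normal(size=2 * L)                 # one tangent vector -> largest exponent
v /= np.linalg.norm(v)
lam = 0.0

for step in range(steps):
    u = 1 - a * x**2 + y                   # local Henon map, x-part (before coupling)
    w = b * x                              # local Henon map, y-part
    # Jacobian of the full lattice map, state ordered (x_0..x_L-1, y_0..y_L-1)
    J = np.block([[C @ np.diag(-2 * a * x), C],
                  [b * np.eye(L), np.zeros((L, L))]])
    x, y = C @ u, w                        # advance the lattice
    v = J @ v                              # advance the tangent vector
    nv = np.linalg.norm(v)
    v /= nv
    if step >= burn:
        lam += np.log(nv)

print("largest Lyapunov exponent ~", lam / (steps - burn))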
  • asked a question related to Coding
Question
1 answer
I'm interested in applying the Non-equilibrium Thermodynamics for Glassy Polymers (NET-GP) [1] framework to Statistical Associating Fluid Theory (SAFT) variations. Although NE-SAFT models have been reported multiple times in the literature [2], none of them explain how to do this starting from equilibrium SAFT codes/programmes (in MATLAB, Python, etc.). The papers generally just write "determined by numerical method" in MATLAB, which doesn't offer much insight.
The biggest issue (that I can identify) is that conventional equilibrium SAFT programmes take temperature (T) and pressure (P) as independent variables, whereas in the NET-GP framework the independent variables are instead temperature (T) and a volume (V)-dependent variable (such as polymer volume or polymer density).
Given this, how should I modify a conventional SAFT code to produce NE-SAFT? Is there a quick workaround for this (T,V) dependency? Or would the only way be rewriting all the SAFT equations to take (T,V) as independent variables?
Relevant answer
Answer
Louis Nguyen To obtain Non-equilibrium (NE)-SAFT from an existing equilibrium SAFT code, various changes must be made to the code.
To begin, you would need to adjust the equations used in the SAFT model to account for the polymer volume (or polymer density) dependency on temperature and pressure. This would very certainly necessitate some adjustments to the mathematical form of the equations.
Second, you'd need to change the code's input such that temperature and volume-dependent variables may be provided as independent variables rather than temperature and pressure. This would very certainly necessitate changes to the input procedures and input storage.
Finally, the output routines would need to be modified to allow for the computation of thermodynamic parameters important to the NE-SAFT model, such as the equation of state, heat capacity, and internal energy.
It may be feasible to discover a faster workaround, but making these changes to the current code will probably be more straightforward and more reliable than searching for one. Changing the code directly ensures that the result is compatible with the NE-SAFT model.
It's also worth noting that these revisions will very certainly necessitate a solid grasp of the mathematical underpinnings of the SAFT model, as well as the alterations required to achieve the NE-SAFT model.
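On the "quick workaround" question: you often do not need to rewrite the model. Most SAFT implementations are internally (T, V) (or (T, rho)) models already, with the (T, P) interface obtained by solving P(T, V) = P for V; running at fixed (T, V) then amounts to skipping, or inverting, that root-finding step. A sketch of the idea, with a placeholder van der Waals equation of state standing in for SAFT:
import numpy as np
from scipy.optimize import brentq

R = 8.314  # J/(mol K)

def pressure(T, V):
    # placeholder (T, V) equation of state; swap in your SAFT P(T, V) here
    a, b = 0.3640, 4.267e-5      # van der Waals constants for CO2, illustration only
    return R * T / (V - b) - a / V**2

def volume(T, P, Vmin=5e-5, Vmax=1.0):
    # the inversion an equilibrium (T, P) code performs internally
    return brentq(lambda V: pressure(T, V) - P, Vmin, Vmax)

V = volume(300.0, 1e5)
print(V, pressure(300.0, V) - 1e5)   # recovered molar volume; residual ~ 0
For NET-GP the point is the reverse: evaluate the Helmholtz-energy-based properties directly at the imposed non-equilibrium polymer density, instead of at the root of P(T, V) = P.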
  • asked a question related to Coding
Question
2 answers
Is there any computational tool/software/program code to convert/map Illumina probe set IDs to gene IDs for a large data file?
Relevant answer
Answer
library("illuminaHumanv4.db")
probeID <- c("ILMN_1690170", "ILMN_2410826", "ILMN_1675640", "ILMN_1801246", "ILMN_1658247", "ILMN_1740938", "ILMN_1657871", "ILMN_1769520", "ILMN_1778401")
# map each probe ID to its gene symbol via the Bioconductor annotation package
data.frame(Gene = unlist(mget(x = probeID, envir = illuminaHumanv4SYMBOL)))
  • asked a question related to Coding
Question
4 answers
Let's say I have two dimensions:
Dimension 1: 2 categories
Dimension 2: 4 categories
If I were to code information based on these dimensions and I wanted to compare the agreement (e.g., dimension 1 = 90% vs. dimension 2 = 70%), considering the difference of categories in each one, is there a statistic that weights by the number of categories?
Would it be more appropriate to dichotomise the categories of dimension 2 to calculate agreement? I mean calculating the agreement for each category of dimension 2 separately, so that it is "category 1 = yes or category 1 = no", "category 2 = yes or category 2 = no", etc., instead of calculating it for all 4 categories together. This way you would calculate the agreement for each category in dimension 2 separately, but the results could be compared with the results for dimension 1.
I would also appreciate bibliography on the topic.
Thank you!
Relevant answer
Answer
I would suggest that you use the 2-category variable as the DV in a binary logistic regression, with the remaining variables as predictor variables. This approach should yield the maximum amount of information in your data. I am attaching some course notes that cover the topic. There are many more, with examples, that you can find with a Google search. Best wishes, David Booth
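If you do want to compare agreement across dimensions with different numbers of categories, note that kappa-type statistics already correct for chance agreement, which itself depends on the number of categories, and the dichotomised per-category kappas described in the question are easy to compute. A sketch with made-up ratings (hypothetical data, scikit-learn assumed):
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
r1 = rng.integers(0, 4, 200)                        # coder 1, dimension 2 (4 categories)
agree = rng.random(200) < 0.8                       # ~80% raw agreement
r2 = np.where(agree, r1, rng.integers(0, 4, 200))   # coder 2

print("4-category kappa:", cohen_kappa_score(r1, r2))
for cat in range(4):                                # category k vs not-k, per the question
    print(cat, cohen_kappa_score(r1 == cat, r2 == cat))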
  • asked a question related to Coding
Question
8 answers
May I have the MATLAB code of some well-known multi-objective benchmark functions like Schaffer, Fonseca, ZDT1, ZDT6, Srinivas, DTLZ5, DTLZ6, LZ09_F1, LZ09_F2, LZ09_F6, LZ09_F7, LZ09_F9, WFG4, CF1, CF2, CF4, CF5, CF6, CF9, and CF10?
Relevant answer
Answer
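For reference, several of these benchmarks are short enough to type in from their published definitions. ZDT1, for example, as a Python sketch (the MATLAB translation is direct):
import numpy as np

def zdt1(x):
    # ZDT1: x in [0, 1]^n; the Pareto front is f2 = 1 - sqrt(f1), reached at x[1:] = 0
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1 + 9 * x[1:].mean()
    f2 = g * (1 - np.sqrt(f1 / g))
    return f1, f2

print(zdt1(np.zeros(30)))        # (0.0, 1.0), a point on the Pareto front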
  • asked a question related to Coding
Question
8 answers
When I try to perform the following calculation, Python gives the wrong answer.
2*(1.1-0.2)/(2-0.2)-1
I have attached a photo of the answer.
Relevant answer
Answer
Mathematically, the answer to the equation is zero; the answer Python spat out is pretty much as close as you can get to the representation of zero with a typical computer.
This is a classic floating point problem: https://en.wikipedia.org/wiki/Floating-point_error_mitigation
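A short demonstration of both the effect and two standard mitigations (tolerant comparison, or exact decimal arithmetic):
import math
from decimal import Decimal

x = 2 * (1.1 - 0.2) / (2 - 0.2) - 1
print(x)                                     # ~2.2e-16 rather than 0
print(math.isclose(x, 0.0, abs_tol=1e-12))   # True: compare against a tolerance
print(2 * (Decimal("1.1") - Decimal("0.2")) / (2 - Decimal("0.2")) - 1)   # exactly 0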
  • asked a question related to Coding
Question
5 answers
Dear all,
I'm just starting to teach myself how to make macros to process confocal images in ImageJ (FIJI). I've managed to make a macro that does most of my processing using the record macro function (it has already saved me an enormous amount of time!), however I'd like to add brightness and contrast adjustment to the processing.
My current process involves:
1. Splitting the multi-channel images
2. Applying grey LUT to all
3. Saving all in a temporary folder
4. Creating a merged image of the channels is various colours
5. Creating a panel with split and merged images
6. Adding a scale bar
7. Saving and closing all windows
However, when I look at the recorded code produced by the brightness and contrast adjustment dialogue, I get code which doesn't seem right to me:
//run("Brightness/Contrast...");
run("Apply LUT");
I've also tried the following code; however, it automatically maximises the contrast, whereas I'd like to use the brightness and contrast adjustment to reduce background in some channels.
run("Enhance Contrast", "saturated=0.35");
Is there anyone else using macros for brightness and contrast adjustment in ImageJ?
Many thanks,
Sam
Relevant answer
Answer
You can set the min and max, for example:
setMinAndMax(2, 160);
The waitForUser(""); command will introduce a pause in the script until you click ok. You can also add a message as an argument e.g., waitForUser("manually adjust the brightness and contrast");
  • asked a question related to Coding
Question
2 answers
I am presently writing a Python-script, DFT-based microkinetic model for a biomass project I am working on. At the moment I need to calculate the degree of rate control (DRC), but I am having trouble writing the appropriate code in Python. I would be very grateful if anyone could help me get past this stage.
I look forward to your kind response.
Many thanks,
Shed!
Relevant answer
Answer
Thanks for your suggestion Mansurbek Urol ugli Abdullaev
Actually, I started with the CATMAP program, but it was not giving me the results I expected; perhaps it has limitations. So I decided to prepare my own script in Python.
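In case it helps others reading along: Campbell's degree of rate control, X_i = dln(r)/dln(k_i), can be obtained by finite differences around any routine that returns the overall steady-state rate for a given set of rate constants. A hedged sketch with a toy two-step rate expression standing in for a real microkinetic solver:
import numpy as np

def rate(k):
    # toy overall rate of a two-step cycle (resistances in series);
    # replace with your steady-state microkinetic rate
    return 1.0 / (1.0 / k[0] + 1.0 / k[1])

def drc(rate, k, i, h=1e-6):
    # Campbell degree of rate control X_i = dln(r)/dln(k_i), central difference
    kp, km = k.copy(), k.copy()
    kp[i] *= 1 + h
    km[i] *= 1 - h
    return (np.log(rate(kp)) - np.log(rate(km))) / (np.log(kp[i]) - np.log(km[i]))

k = np.array([1.0, 10.0])
X = [drc(rate, k, i) for i in range(len(k))]
print(X, sum(X))    # for this cycle the DRCs sum to ~1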
  • asked a question related to Coding
Question
2 answers
I used "sf" package in Rstudio to label country and state name but I always get the same error message when I tried to run the st_centroid command.
> st_centroid(world, of_largest= FALSE)
Error in wk_handle.wk_wkb(wkb, s2_geography_writer(oriented = oriented, :
Loop 96 is not valid: Edge 743 crosses edge 998
In addition: Warning message:
In st_centroid.sf(world, of_largest = FALSE) :
st_centroid assumes attributes are constant over geometries of x
Relevant answer
Answer
Dear Mohd,
there is an easy solution: Use st_point_on_surface() instead of st_centroid. It guarantees that the point you create is in the polygon. For a more comprehensive overview take a look here: https://geocompr.robinlovelace.net/geometric-operations.html
Cheers
Philipp
  • asked a question related to Coding
Question
2 answers
I want to write a descriptor for calculating the interactions in a protein-peptide complex, and then create a neural network for docking.
I found some code, but it is for protein-ligand complexes.
Relevant answer
Answer
  • asked a question related to Coding
Question
1 answer
Hi there
I used Atlas.ti back in the day, and I remember that after you made a code, the text would highlight itself in the color connected to the code. With this you could easily see what you had and hadn't coded.
Now with Atlas.ti 8 it just makes this small black line at the end of the sentence which is quite hard to see.
Can I highlight the text with the same color as the code?
Thank you!
Relevant answer
Answer
This is the response I got from their support team:
You can highlight text by creating quotations (https://doc.atlasti.com/ManualMac.v22/Quotations/QuotationsTextCreating.html) in ATLAS.ti 22. You can also edit text documents (https://doc.atlasti.com/ManualMac.v22/Documents/DocumentsManaging.html?highlight=edit%20text#editing-documents) in case you want to add colors, bold formatting, etc.
  • asked a question related to Coding
Question
4 answers
I am trying to investigate the impact of multiple technology adoptions on production costs using seven years (2015–2021) of panel data. My sample size is 405, separated into two groups: a treatment group and a control group. I would like to use the difference-in-differences (DD or DID) model to assess the impact. My coding is "0 = non-adopter, 1 = partial adopter, and 2 = full adopter".
At present I am struggling with coding issues: in 2015 and 2016 the treatment group adopted only one technology, and from 2017 to 2021 they adopted both technologies.
YEAR    Tech A    Tech B    BOTH
2015      0         1         ?
2016      0         1         ?
2017      1         1         2
2018      1         1         2
2019      1         1         2
2020      1         1         2
2021      1         1         2
I would kindly like to know whether I should code 1 or code 2 for 2015 and 2016?
Thank you very much.
Relevant answer
Answer
Thank you very much Dr. proloy
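For what it's worth, once the adoption coding is settled, the DID estimate itself is one regression; a hedged Python sketch with synthetic data (statsmodels assumed). For the 2015-2016 issue, one alternative to forcing a 1 or a 2 is to keep separate indicators for partial and full adoption:
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 405 * 7
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),            # treatment-group indicator
    "year": rng.integers(2015, 2022, n),
})
df["post"] = (df["year"] >= 2017).astype(int)    # period when both technologies adopted
df["cost"] = 10 - 2 * df["treated"] * df["post"] + rng.normal(size=n)

# the coefficient on treated:post is the difference-in-differences effect
m = smf.ols("cost ~ treated * post", data=df).fit()
print(m.params)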
  • asked a question related to Coding
Question
2 answers
THERMODYNAMIC ANALYSIS OF TRIPLE EFFECT ABSORPTION CHILLERS
Energy and Exergy analysis .
Relevant answer
Answer
interested
  • asked a question related to Coding
Question
2 answers
I have heard that the new BME688 sensor can be used to detect volatile organic compounds (VOCs), volatile sulfur compounds (VSCs), and other gases such as carbon monoxide and hydrogen. However, we could not really work out how to interface this sensor or which code to use to obtain these data. At present we know of code that gives an air quality index, but that is limited to temperature, pressure, and humidity. If anyone has researched it or knows a way to obtain the other data using a Raspberry Pi, please let me know! Thanks!
Relevant answer
Answer
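One commonly used route on a Raspberry Pi is Pimoroni's bme680 Python library, which is generally reported to also talk to the BME688 in its standard mode. Note that the gas channel returns a raw heater-plate resistance, not speciated VOC/VSC or CO/H2 concentrations; for calibrated index estimates, Bosch's closed-source BSEC library is needed. A hedged sketch, assuming the sensor sits on the primary I2C address:
import bme680

sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)
sensor.set_gas_heater_temperature(320)   # heater profile for the gas plate
sensor.set_gas_heater_duration(150)
sensor.select_gas_heater_profile(0)

if sensor.get_sensor_data() and sensor.data.heat_stable:
    print("T [C]:", sensor.data.temperature)
    print("RH [%]:", sensor.data.humidity)
    print("gas resistance [ohm]:", sensor.data.gas_resistance)   # falls as VOCs rise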
  • asked a question related to Coding
Question
4 answers
Hello
I am going to pursue a master's in sustainable chemical engineering this year. I want to ask what IT skills you think I will need. To give you a better picture, I attach my curriculum.
Relevant answer
Answer
I am not sure you have the right question or problem statement in your head. I tend to agree with Prem Baboo. I can tell you I am not much of a programmer myself: it is a tool to me, not a career. Python is used a lot, but are you going into a programming career or a chemical engineering career? Those are not the same thing. When I think chemical engineering, I do not think IT; I think problem solving, working in a team on projects, etc. As a sustainable chemical engineer, what would you be doing? That's kind of vague. I think as a chemical engineer you need to be flexible with changing demands. I myself have been at one company, but I know others who have moved from company to company and role to role. I think you need to figure out what you want to do or would be happy doing. School will not have taught you everything you will need to know; there is no way your college classes ever could. You gain knowledge and skills over time based on what you are working on.
Generically speaking, be flexible: no one set of IT skills is going to set you up, and each problem or task may need a different approach. If I don't know how to do something, I ask someone who does, or team up with someone who has a skill I don't have; instead of taking a few months to learn a new skill, I might be better off teaming with someone who already has it. It's called teamwork: at a company, both of you work together for the betterment of the company as a whole. Does your company put more value on individual accomplishment or on team solutions? You don't need to know everything, but a good network of contacts and colleagues will help you with your needs, and in return they can draw on your skills and expertise for the problems they have.
  • asked a question related to Coding
Question
5 answers
Hi everyone!
Has anyone tried to use this antibody for immunofluorescence on brain slides? My problem is that I've done a double immunofluorescence for Iba1 (WAKO) and TREM2 on rat brain slides, but the labels don't colocalize. The TREM2 antibody seems to label cells that are Iba1-negative (I've attached a confocal picture of the IHC showing both labels).
I hope somebody can help me!
Thank you in advance!
Relevant answer
Answer
All the best :-)
  • asked a question related to Coding
Question
3 answers
When calculating the adsorption of a dihydrocresol isomer at the PBE+TS-HI level in the VASP code on a solid metallic surface, the job ended with this error message: "some force-field parameter for the following atom is not defined: H 11 H 16". I am really looking for a way to overcome it.
Relevant answer
Answer
Update: re-running the calculation with identical parameters helped - it converged as usual, no error appeared this time. It seems that this is some kind of bug which only appears occasionally (at least I have only experienced it once in more than 10 calculations using IVDW = 2).
  • asked a question related to Coding
Question
8 answers
I am currently working on the identification of the parameters of a solar panel (5-parameter model) using the PSO and ABC algorithms. Unfortunately, I find it hard to code these algorithms. Could you please send me the MATLAB codes of PSO and ABC used for parameter identification? Thank you.
Relevant answer
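In the meantime, a minimal generic PSO is short enough to sketch, and the MATLAB translation is mechanical. The quadratic objective below is a stand-in: for the 5-parameter PV model it would return the error between measured and simulated I-V points:
import numpy as np

def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # minimal particle swarm minimiser; bounds is one (lo, hi) pair per dimension
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(bounds)
    x = rng.uniform(lo, hi, (n, d))
    v = np.zeros((n, d))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # keep within bounds
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()                         # global best
    return g, pval.min()

print(pso(lambda p: ((p - 1) ** 2).sum(), [(-5, 5)] * 5))       # converges to all ones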
  • asked a question related to Coding
Question
3 answers
I have developed some code, sketched vaguely below:
bigFunction <- function( ){
  Fun1 <- function( ){ }
  Fun2 <- function( ){ }
  Fun3 <- function( ){ }
}
i <- 1:2000
a <- sapply(i, function(i) bigFunction( ))
This bigFunction gives 2000 values.
a1 <- mean(a), which gives 1 value.
Now I would like to change an input of bigFunction, which then gives another 2000 values, and find
a2 <- mean(a).
This will happen over and over for a total of 100 times, and if possible the changes in the parameter values should happen programmatically, so that at the end I will have a1, a2, a3, ..., a100.
Relevant answer
Answer
# outer loop over the 100 parameter settings, inner loop over the 2000 replications;
# param_values is a hypothetical vector holding the 100 inputs to bigFunction
a <- sapply(param_values, function(p) mean(replicate(2000, bigFunction(p))))
  • asked a question related to Coding
Question
3 answers
Dear Colleagues
I am using a UMAT for modeling the hysteresis behavior of a material.
It is a simple UMAT which modifies the elastic modulus of the material at different strains in order to reproduce the hysteresis behavior.
I need to convert it into a VUMAT for explicit dynamic simulation in ABAQUS.
What should I change in my code?
I have attached my code.
Relevant answer
Answer
The code will be almost the same; you only need to change the subroutine interface and variables as per VUMAT, and there is no need for the Jacobian calculation in VUMAT. After these changes, I think it should work.
  • asked a question related to Coding
Question
4 answers
How can I generate all possible non-repeated combinations of .mat files?
For example, there are the following .mat files: One.mat, Two.mat, Three.mat,
and each of these files contains 2 vectors, for example: One.mat contains vectors V1 and V2, Two.mat contains C1 and C2, and Three.mat contains D1 and D2.
So I want to generate the following combinations:
Comb1.mat contains: One.mat and Two.mat
Comb2.mat contains: One.mat and Three.mat
Comb3.mat contains: Two.mat and Three.mat
Comb4.mat contains: One.mat, Two.mat and Three.mat
Is there any MATLAB code that can help me achieve this?
Note: I don't want to do this manually, because there are many groups of .mat files and making the combinations by hand would take time.
Relevant answer
You sent me a notification that you solved the problem.
May I propose a solution?
The solution is as follows:
Assume that you have N files and you want to pair them uniquely. Writing X_ij for the combination of files i and j (with i < j), the combinations are:
combinations = SUM_{i=2..N} X_1i + SUM_{i=3..N} X_2i + SUM_{i=4..N} X_3i + ... + X_(N-1)N,
i.e. N(N-1)/2 pairs in total, plus the single combination containing all N files.
Please tell me whether this solution satisfies your requirement or not.
Best wishes
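In MATLAB this is a job for nchoosek over the file list; the same idea as a Python sketch (scipy assumed, file names as in the question):
from itertools import combinations
from scipy.io import loadmat, savemat

files = ["One.mat", "Two.mat", "Three.mat"]      # extend with your own list
n = 0
for r in range(2, len(files) + 1):               # pairs, triples, ... up to all files
    for group in combinations(files, r):         # non-repeated combinations
        merged = {}
        for fname in group:
            # copy the vectors (V1, C1, ...) of every file in the group
            merged.update({k: v for k, v in loadmat(fname).items()
                           if not k.startswith("__")})
        n += 1
        savemat(f"Comb{n}.mat", merged)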
  • asked a question related to Coding
Question
5 answers
Dear researchers,
I am conducting a meta-analysis using a random-effects model in which some studies have a single effect size, while other studies have 2 or more effect sizes (2 or more subgroups). For example, study A has OR for overweight and obesity combined, while study B has separate results for overweight and obesity. These are my studies (fictional):
A overweight+obesity OR=1.4
B overweight OR=1.1
B obesity OR=1.8
When conducting a meta-analysis on such studies it is advisable to combine B overweight and B obesity using a fixed-effects model (since they have the same control group) and then to perform a meta-analysis using a random-effects model (i.e. conducting meta-analysis using the study as a unit of analysis). Can anyone please tell me how to perform that in R?
Relevant answer
Answer
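In R the usual route is metafor: rma() with method="FE" on the subgroup effects of study B, then rma() on the study-level estimates. The arithmetic behind it is compact enough to sketch in Python on the log-OR scale; the variances below are made up:
import numpy as np

def fixed(y, v):
    # inverse-variance fixed-effect pool of effects y with variances v
    w = 1 / np.asarray(v)
    return float(np.sum(w * y) / w.sum()), float(1 / w.sum())

def dersimonian_laird(y, v):
    # random-effects pool with the DerSimonian-Laird tau^2 estimate
    y, v = np.asarray(y), np.asarray(v)
    w = 1 / v
    yb = np.sum(w * y) / w.sum()
    Q = np.sum(w * (y - yb) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    ws = 1 / (v + tau2)
    return float(np.sum(ws * y) / ws.sum()), float(1 / ws.sum())

# study B: overweight and obesity pooled first (fixed effect, shared control group)
yB, vB = fixed(np.log([1.1, 1.8]), [0.04, 0.05])
# then study A and the pooled B combined under random effects
est, var = dersimonian_laird([np.log(1.4), yB], [0.03, vB])
print(np.exp(est), var)
One caveat worth noting: pooling subgroups that share a control group ignores the induced correlation, so splitting the control-group counts between the subgroups (or a multivariate model) is often recommended.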
  • asked a question related to Coding
Question
4 answers
I am running the code below to add error bars onto my bar chart; I am trying to work out the standard deviation (sd) of the Shannon diversity index.
The code works up until the second/third last line:
geom_errorbar(aes(ymin=shannon-sd, ymax=shannon+sd), width=0.2,
position=position_dodge(0.9))
This error message keeps coming up:
"Error in shannon - sd : non-numeric argument to binary operator"
but this is my data...
Exposure genus shannon sd
Exposed Crustacean 0.000000 0.00000000
Exposed Mollusc 1.199625 0.13291129
Exposed Seaweed 1.513125 0.42093822
Sheltered Crustacean 0.025500 0.07212489
Sheltered Mollusc 1.156750 0.26763341
Sheltered Seaweed 1.848125 0.27264128
Does anyone know where I'm going wrong?
library(ggplot2)
#+++++++++++++++++++++++++
# Function to calculate the mean and the standard deviation
# for each group
#+++++++++++++++++++++++++
# data : a data frame
# varname : the name of a column containing the variable
#to be summariezed
# groupnames : vector of column names to be used as
# grouping variables
data_summary <- function(data, varname, groupnames){
  require(plyr)
  summary_func <- function(x, col){
    c(mean = mean(x[[col]], na.rm=TRUE),
      sd = sd(x[[col]], na.rm=TRUE))
  }
  data_sum <- ddply(data, groupnames, .fun=summary_func, varname)
  data_sum <- rename(data_sum, c("mean" = varname))
  return(data_sum)
}
df3 <- data_summary(diversity, varname="shannon",
                    groupnames=c("Exposure", "genus"))
# Convert genus to a factor variable
df3$genus=as.factor(df3$genus)
head(df3)
p <- ggplot(diversity, aes(x=genus, y=shannon, fill = location)) +
  geom_bar(stat="identity", width = 0.5, position=position_dodge()) + theme_minimal() +
  geom_errorbar(aes(ymin=shannon-sd, ymax=shannon+sd), width=0.2,
                position=position_dodge(0.9))
p + scale_fill_brewer(palette="Paired") + theme_minimal()
Relevant answer
Answer
Could you read the data into a very simple program, possibly multiplying the numbers, to see if you get the error because the file is slightly corrupted? You could also move the lines around before running the program to see where the problem is. As Liang Chen suggests, it seems like a simple problem with the file.
  • asked a question related to Coding
Question
7 answers
Is there a comprehensive tool for analyzing Scratch projects?
Relevant answer
Answer
God bless you all for recommending, reading, and citing my research work; I assure you that I will recommend, read, and cite the work of other researchers on ResearchGate.
  • asked a question related to Coding
Question
18 answers
What is the most flexible, user-friendly, and easy-to-update presentation tool for online teaching that can be used by academics from different fields of study?
Teaching interdisciplinary courses requires collaboration between people who think in different ways. For instance, someone with excellent coding skills may not be able to work on graphical presentations efficiently, while someone else may contribute appropriate demonstrations and imagination but not have a coherent knowledge of coding. How can they cooperate for a better teaching method together?
Relevant answer
Answer
There is no one-fit-for-all formula. Visuals and texts are good for theory based subjects. But good old verbal discussions are better for subjects like language teaching and communication classes.
  • asked a question related to Coding
Question
5 answers
How can you share the programs and code (like MATLAB code) which you have used in your paper, so that they can be referenced scientifically? Is there any scientific journal that does this for free?
Relevant answer
Answer
Mostafa Shabani you can share your code using GitHub.com. You need to create an account there and then upload your code publicly. For a citable reference, you can also archive a GitHub release on Zenodo, which assigns it a DOI.
  • asked a question related to Coding
Question
3 answers
Hello everyone, I’m wondering if anyone can help me in using the finite element analysis code “FEMIX.4”. As i want to simulate the non-linear behaviour of concrete utilizing 3D multi directional fixed smeared crack model using FEMIX. And i’m not very familiar with such computer codes.
thanks in advance.
Relevant answer
Answer
pleasure Mohamed Sherif
Stay Happy Stay Healthy.
  • asked a question related to Coding
Question
4 answers
I have a large database of posts (Yaks) collected from the social media app Yik Yak (back when it was still active). I used a program to collect Yaks at random intervals from 50 randomly selected universities, stratified by US region, locale (rural vs urban), control (private vs public), and size (large vs not large). We collected over 115,000 Yaks. My students used NVivo 11 Pro to code the data (in this case, we were looking at substance-use-related posts), and we ended up coding 1670 Yaks into our Substance Use "Node." Now, I want to be able to view (and export) all 1670 of those Substance Use references, along with their associated attributes (region, locale, control, and size). We have been trying for days and can't figure out how to do this. It seems like a basic function that should be easy. Can anyone offer any help?
We want to export the data in order to run various chi-square and regression analyses, and I want the whole data set (all 1670) and not just the summary data (i.e. summed frequencies of substance use Yaks for each region). This way, the appropriate attribute values will be connected to each specific data point during the quantitative analyses.
Relevant answer
Answer
Hi Jason, I realize that you asked this question 4 years ago. Did you get an answer somewhere? I have the same problem/question and I also cannot figure it out. I agree that it does seem like a basic function that should exist (somewhere).
  • asked a question related to Coding
Question
6 answers
Hello everyone,
I'm trying to model the motion of a cylinder in both the cross-flow (CF) and in-line (IL) directions, but the UDF code that I have can only move the cylinder in the CF direction. Sadly, I have never had the chance to learn to write UDFs. Can someone please help me?
I will share the code below.
I really appreciate any help you can provide or at least promote this question to find the best answer to my issue.
FYI, this code is for Vortex-Induced Vibration of cylinders
####################################################
#include "udf.h"
#include "dynamesh_tools.h"
#include "unsteady.h"

FILE *fout;
static real v_prev;

DEFINE_SDOF_PROPERTIES(stage, prop, dt, time, dtime)
{
    Thread *t;
    Domain *d = Get_Domain(1);
    real x_cg[3], force[2], moment[3];
    real cg;                /* Center of gravity position */
    real vel;               /* Cylinder velocity */
    real Fy;                /* Lift force */
    real mass = 8.88;       /* Cylinder mass */
    real fn = 1.06;         /* System natural frequency */
    real wn = 2*M_PI*fn;    /* System angular frequency */
    real z = 0.002;         /* Damping ratio */
    real k = mass*wn*wn;    /* System stiffness */
    real c = 2*mass*wn*z;   /* System damping */

    /* Get the thread pointer for which this motion is defined */
    t = DT_THREAD(dt);

    prop[SDOF_MASS] = 8.88;   /* System mass for the 6DOF solver */

    /* Only Y (cross-flow) translation is left free here; allowing in-line
       motion would presumably mean setting SDOF_ZERO_TRANS_X to FALSE and
       adding an analogous spring-damper load on X (e.g. via SDOF_LOAD_F_X) */
    prop[SDOF_ZERO_TRANS_X] = TRUE;
    prop[SDOF_ZERO_TRANS_Z] = TRUE;
    prop[SDOF_ZERO_ROT_X] = TRUE;
    prop[SDOF_ZERO_ROT_Y] = TRUE;
    prop[SDOF_ZERO_ROT_Z] = TRUE;

    cg = DT_CG(dt)[1];        /* current Y position of the center of gravity */
    vel = DT_VEL_CG(dt)[1];   /* current Y velocity */

    /* Restoring spring and damper force in the cross-flow direction */
    prop[SDOF_LOAD_F_Y] = -k*cg - vel*c;

    fout = fopen("results.txt", "a");   /* Open file */
    /* Write time, displacement and velocity into file */
    fprintf(fout, "%g %g %g\n", time, DT_CG(dt)[1], DT_VEL_CG(dt)[1]);
    fclose(fout);
}
Relevant answer
  • asked a question related to Coding
Question
2 answers
Hello,
I am currently working on a UDF using DEFINE_HET_RXN_RATE for heterogeneous reactions. My idea is to use VOF to describe a simple VLE method using the vapor pressure of the condensable components (in my case, water and methanol). I am using the evaporation-condensation UDF provided by Ansys in their UDF manual as a baseline.
Right now I am having trouble filtering the rate of the reaction based on the species names. I tried to use the STREQ and !strcmp commands, but every time I compile I get the following message:
  • error: member reference base type 'real' (aka 'double') is not a structure or union
Apart from a problem with an exponent whose value depends on the temperature, this is the only issue left before I can test the UDF.
  • asked a question related to Coding
Question
3 answers
Hi.
I am trying to plot an Euler diagram with counts and percentages of counts relative to the total cohort (n=1615). I have written the following code:
plot(euler(c("LUTS" = 98, "Symptoms of UTI" = 46, "Positive urine culture" = 39,
             "Symptoms of UTI&LUTS" = 33, "LUTS&Positive urine culture" = 24,
             "Symptoms of UTI&Positive urine culture" = 22,
             "Symptoms of UTI&LUTS&Positive urine culture" = 18),
           input = "union", shape = "ellipse"),
     key = TRUE, counts = TRUE,
     quantities = list(type = c("counts", "percent"), font = 3, round = 2, cex = 0.8),
     fills = list(fill = c(viridis::plasma(n = 3))), alpha = 0.3,
     c("#1957FF", "#FF750C", "#FF220C"), alpha = 0.3,
     edges = list(lty = 1), factor_names = TRUE,
     labels = list(font = 2, cex = 1), legend = FALSE)
and, as expected, have got the graph attached.
My question is: how can one replace the percentages with custom ones, e.g. "59 (3.65%)" instead of "59 (48%)"? Thank you in advance.
Relevant answer
Answer
I concur with Mahmoud
  • asked a question related to Coding
Question
4 answers
I recently started working on a meta-analysis involving 73 studies; however, when the forest plot diagram is made, I am unable to zoom out of it to view my result. The only way I can view it is to rotate my screen so it is in portrait mode.
Are there any settings or packages in R which allow me to do this? I have tried the zoom package but it didn't work.
thank you.
Relevant answer
Answer
I concur with Arash
  • asked a question related to Coding
Question
8 answers
I have multiple files in a folder which have similar names but different numerical labels. For example, the files are named phi=1_theta=1, phi=1_theta=2, phi=2_theta=1, and so on. The labelling of the files is shown in the attachment. How do I write suitable Fortran code to open these files in a loop?
There are 16 phi x 16 theta files in the folder overall.
Relevant answer
Answer
I had the same problem, but if you close each file after opening it, then it will read all the files.
  • asked a question related to Coding
Question
18 answers
I work on spectral methods for solving ODEs and PDEs. If you have any equation to solve, send me the governing equation, the boundary conditions, and the initial conditions, and I can propose MATLAB code to solve it.
Best regards.
Relevant answer
Answer
Dear Slim,
I have an equation, shown in the picture in the attachment, and I want to know its eigenvalues. I computed them with spectral methods in MATLAB, but I don't know if I'm right. Can you help me calculate these eigenvalues?
Thank you very much!
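Since the attached picture isn't reproduced here, a generic cross-check may still help: the standard Chebyshev collocation recipe (Trefethen's cheb matrix) applied to u'' = lambda*u with Dirichlet conditions on [-1, 1], whose exact eigenvalues are lambda_k = -(k*pi/2)^2. A Python sketch:
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and grid (Trefethen, Spectral Methods in MATLAB)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                     # impose u(-1) = u(1) = 0
lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
print(lam[:5])
print(-(np.arange(1, 6) * np.pi / 2) ** 2)   # exact eigenvalues for comparison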
  • asked a question related to Coding
Question
1 answer
To study bituminous mastic behavior, a test called stress relaxation is done; it shows the solid- or fluid-like behavior of the mastic. With the generalized Maxwell model, one can predict the relaxation of the mastic. How does one enter the constitutive equation of the model in code?
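In code, the generalized Maxwell model usually enters as a Prony series for the relaxation modulus, E(t) = E_inf + sum_i E_i exp(-t/tau_i), with the branch moduli and relaxation times fitted to the stress-relaxation data. A minimal Python sketch with arbitrary parameters:
import numpy as np
import matplotlib.pyplot as plt

E_inf = 0.5                       # equilibrium modulus (0 for fluid-like behavior)
E = np.array([3.0, 1.5, 0.8])     # branch stiffnesses [MPa], arbitrary values
tau = np.array([0.1, 1.0, 10.0])  # relaxation times [s], arbitrary values

t = np.logspace(-3, 3, 200)
Et = E_inf + (E[None, :] * np.exp(-t[:, None] / tau[None, :])).sum(axis=1)

plt.loglog(t, Et)
plt.xlabel("time [s]")
plt.ylabel("relaxation modulus E(t) [MPa]")
plt.show()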
  • asked a question related to Coding
Question
5 answers
I am trying to map NO2 using NASA .nc files in Spyder (Python). These files contain the scientific data sets (SDS) and can be converted to CSV format and opened with Excel. The Excel files then contain data for the quantity of the specific pollutant, in this case NO2, in a format like 9.97E-36. How do I map this? Do I convert it into a Python-readable file first, since those can be mapped easily? But would that require coding? Please help.
Relevant answer
Answer
Adrien Wehrlé I will try using this and then let you know.
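In case a concrete starting point helps: the .nc files can be read directly in Python (no CSV detour) and mapped with matplotlib. A hedged sketch; the file and variable names are hypothetical, so inspect ds.variables.keys() for the real ones in your product:
from netCDF4 import Dataset
import matplotlib.pyplot as plt

ds = Dataset("no2_file.nc")          # hypothetical file name
print(ds.variables.keys())           # find the actual lat/lon/NO2 variable names

lat = ds["lat"][:]                   # hypothetical variable names
lon = ds["lon"][:]
no2 = ds["NO2"][:]

plt.pcolormesh(lon, lat, no2, shading="auto")
plt.colorbar(label="NO2 column")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.show()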
  • asked a question related to Coding
Question
4 answers
I have performed all the attacks for my image cryptography algorithm. Finally, I need to run the NIST tests on my cryptography algorithm. If anyone has the code, kindly share it. Please do the needful.
Relevant answer
Answer
Actually, the NIST test suite consists of a bundle of tests, so you need not write code for all of them yourself to assess the randomness of the image. The code for all these tests is available on the NIST website; you just need to download it and run it using Eclipse or any other IDE. It is simple and useful. The only thing you need to know is the procedure for running the different tests.
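To give a flavour of what the suite does, its simplest member, the frequency (monobit) test of NIST SP 800-22, fits in a few lines; the bitstream here comes from Python's PRNG purely for illustration:
import math
import random

def monobit(bits):
    # NIST SP 800-22 frequency test: p >= 0.01 is consistent with randomness
    n = len(bits)
    s = abs(sum(2 * b - 1 for b in bits))    # endpoint of a +1/-1 random walk
    return math.erfc(s / math.sqrt(2 * n))

bits = [random.getrandbits(1) for _ in range(100000)]
print(monobit(bits))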
  • asked a question related to Coding
Question
11 answers
We have feedback from participants. We have created codes and subcategorized them. Instead of fully categorizing and creating themes, we want to place the codes under the related themes to answer our research questions. These themes reflect our research purpose, i.e. what we want to investigate.
I wonder which approach this corresponds to?
A deductive approach, maybe?
Thank you in advance!
Relevant answer
Answer
The first data analytic phase of GT begins with the coding of the data. This is undertaken to conceptualize the data by discovering categories into which they fit. The coding process has three phases: open coding, axial coding, and selective coding. In open coding, researchers describe the data by looking at it line-by-line. This strategy of focusing on small units of data, and their interpretation, encourages the development of a theoretical sensitivity to new ideas with regard to the data, and helps prevent the forcing of data into existing categories. Most researchers maintain that when a full array of categories have been identified, one should undertake axial coding – whereby one puts the data back together again in new ways by making connections between the numerous categories. Following that, a selective coding step is implemented in which the researcher looks to systematically identify those categories that relate closely to the core category. The core category lies at the heart of the emerging theory and is central to its integration. https://www.sciencedirect.com/topics/neuroscience/grounded-theory
  • asked a question related to Coding
Question
11 answers
Hi, I am planning to collect matched data from employees and their supervisors about some organizational variables via an online survey. I have seen that researchers assign codes to group the data together. Can anyone please explain how to assign the codes, or how to collect this kind of grouped data online? Sorry, I am doing this for the first time. Thanks.
Relevant answer
Answer
Usually more than one employee reports to a supervisor in a department. In that case you may use a coding scheme based on the number of supervisors and reporting employees: for example, S1 for supervisor 1, and E1a, E1b, E1c, etc. for the employees under S1's supervision.
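If it helps, such matched identifiers can also be generated programmatically before fielding the survey; a quick R sketch (three supervisors with four employees each, purely illustrative):
sup <- paste0("S", 1:3)
emp <- paste0("E", rep(1:3, each = 4), letters[1:4])   # E1a..E1d, E2a..E2d, E3a..E3d
data.frame(supervisor = rep(sup, each = 4), employee = emp)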
  • asked a question related to Coding
Question
7 answers
Hello everyone
So I'm using Jupyter for my Python programming, but some functions and methods aren't working properly, and some don't work at all.
Which versions of Jupyter and Anaconda should I use? And should I switch to Anaconda2 instead of Anaconda3?
Thank you in advance for responding to my question.
Relevant answer
Answer
Hi,
Anaconda 3 and Python 3.
  • asked a question related to Coding
Question
21 answers
Hello,
I have written a UMAT subroutine, and when I run the code it completes successfully. I am using C3D8 elements. The subroutine seems to be working, as the job completes successfully. However, when I try to view the results from the ODB file I get the following error message:
“The selected Primary Variable is not available in the current frame for any elements in the current display group.”
Could this be because there is something wrong in my subroutine? I would appreciate your input. Please see the UMAT file attached.
I have tried to select the stress, the strain, and the magnitude; none are available.
I have tried to run the input file with an in-built material model (say, linear elastic) and I did not encounter the same issue. Everything was OK.
Relevant answer
Answer
In your UMAT subroutine, replace: include 'aba_param.inc'
with
implicit real*4(a-h,o-z) (for single precision)
or
implicit real*8(a-h,o-z) (for double precision)
Be careful: the above statement means that all variables whose names start with a-h or o-z will be treated as real variables.
  • asked a question related to Coding
Question
5 answers
Can someone provide me with R code for network analysis, where I can establish the relationships among the variables using the underlying concepts of SNA with data from social media? I have been going through code on Stack Overflow and GitHub, but the machine and processing time is very high when working on data extracted from social media.
Your help will be acknowledged by mentioning your name in the published manuscript on this work.
Relevant answer
Answer
That means, you "just" want to compute nodes' centrality and relate that to some other node-level characteristics? Density is a network-level concept, so I don't know what you want to do with that in this context.
  • asked a question related to Coding
Question
2 answers
Dear All,
I am trying to calculate the band structure of a zigzag graphene nanoribbon by creating a large two-dimensional supercell. So basically, the system is constructed as an array of graphene nanoribbons in which the separation distance between the ribbons is large. The k-point input in my SCF calculation is set to automatic, and every nosym parameter is set to false.
My question is: why is the number of k-points actually generated by the code larger than the input?
Ex: for k-points 12 12 1 0 0 0 -> total k-points is 148
      for k-points 40 10 1 0 0 0 -> total k-points is 604
If I'm correct, symmetry usually reduces the number of k-points, but I don't understand why in this case the number of k-points becomes larger than the input.
Thank you,
Satrio
Relevant answer
Answer
Did you solve this problem?
I am facing the same problem.
  • asked a question related to Coding
Question
4 answers
I'm trying to create a heatmap from a matrix of RNA-seq CPM values that has been filtered for low expression and sorted by variance. The matrix is shown below.
The gene names in the following matrix correspond to the row numbers of the matrix above.
When I make the heatmap, I get the row numbers of the first matrix instead of the gene annotations, so I would like to produce a matrix that looks like the following (edited in Paint, not actually from R).
Is there any way I can do this? I need the gene names to correspond to their respective row numbers in the first matrix, with reference to the second matrix,
i.e. if the row number in matrix 1 is 14, the gene ID should be dnaK.
Relevant answer
Answer
Hi Julia,
to produce the matrix in the edited image, you could use:
rownames(matrix) <- matrix[, 1]
This takes the names in your V1 column and uses them as the new row names. Afterwards you can delete the first column containing the names:
matrix <- matrix[, -1]
However, it might be a good idea to use a data.frame instead of a matrix.
  • asked a question related to Coding
Question
8 answers
If anyone could help me, it is for my dissertation work. Thank you. I am looking for MATLAB code to solve PDEs using RBFs (radial basis functions).
  • asked a question related to Coding
Question
4 answers
My real system (a buck converter) can only take an input between 0 and 1 (the duty ratio), and I need to constrain the system so that the controller action stays within these bounds. How can I do this?
The code for the project is below in three images; it is Steve Brunton's genetic algorithm code. https://youtu.be/S5C_z1nVaSg?t=470
I assume the u variable is the duty ratio, but I am not sure.
The last image is the control system I am attempting to control. The transfer function is an adequate fit for the system.
Thanks in advance; if any more information is needed, please ask away.
Relevant answer
Answer
Hi Muhammad Abbas Khan, that is my post on the MATLAB forums; no one has answered it.
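While you wait for an answer there: one generic way to respect actuator limits is to saturate the controller output before it reaches the plant, and to penalize saturated commands in the GA's cost function; a minimal R sketch (u is a hypothetical duty-ratio command):
# Clamp a duty-ratio command into [0, 1]
saturate <- function(u, lo = 0, hi = 1) pmin(pmax(u, lo), hi)
saturate(c(-0.2, 0.5, 1.3))   # returns 0.0 0.5 1.0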
  • asked a question related to Coding
Question
3 answers
Where can I get code for the k-prototypes algorithm for mixed attributes? Has anyone implemented it in any language?
Relevant answer
Answer
I recently found an implementation of kprototypes in Python.
Besides, here is a useful example of kprototypes.
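If R is an option, the clustMixType package implements k-prototypes for mixed numeric/categorical data; a minimal sketch on simulated data (check ?kproto for the exact interface of your package version):
library(clustMixType)
set.seed(1)
df <- data.frame(num = rnorm(100),                                   # numeric attribute
                 cat = factor(sample(c("a", "b", "c"), 100, TRUE)))  # categorical attribute
fit <- kproto(df, k = 3)   # k-prototypes clustering
table(fit$cluster)         # cluster sizes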
  • asked a question related to Coding
Question
9 answers
Hi, I did a literature review on innovation diffusion theories (Rogers), and I attempted to test the practicality of these theories when introducing novel medical devices. I conducted semi-structured interviews with clinicians. So is it correct that I am using a deductive method, since I am looking for existing concepts in my data? And what method should I use to analyse the interviews: thematic or content analysis? Thank you!
Relevant answer
Answer
Hopefully the following publications could help:
  • Attride-stirling, J. (2001) Thematic networks: an analytic tool for qualitative research, Qualitative Research, 1, 3, pp. 385-405.
  • Braun, V. and Clarke, V. (2006) Using thematic analysis in psychology, Qualitative Research in Psychology, 3, 2, pp. 77-101.
  • Braun, V. and Clarke, V. (2014) Editorial: What can "thematic analysis" offer health and wellbeing researchers?, International Journal of Qualitative Studies on Health and Well-being, 9, 1, pp. 1-2.
  • Joffe, H. (2012) Thematic Analysis, in Harper, D. and Thompson, A.R. (eds.) Qualitative methods in mental health and psychotherapy: A guide for students and practitioners. West Sussex, UK: John Wiley & Sons, Ltd, pp. 209-223.
  • asked a question related to Coding
Question
6 answers
What is the best QR code generator software?
Relevant answer
Answer
^^ To: All 19 million scientists and researchers at RG platform!
  • asked a question related to Coding
Question
17 answers
What are the most valuable lessons you've learnt after using R?
Relevant answer
Answer
In my view, R is efficient for quick data analysis and visualization. It is also simple to expose results through an interactive web app with the package shiny.
However, I agree with Clément Poiret that Python is almost inevitable when you work in industry, thanks to its versatility and its fit with continuous integration processes. I first learned coding with R at school; a few years later I had to use Java and C++, which led me to review and improve the way I coded in R. After that I learned Python for OOP and statistical programming, which improved my R skills again. Now, with a few years of experience, I often switch between R and Python depending on the task. As I said, I always use R and RStudio for data exploration (I find it powerful for big datasets and parallel processing) and for quick reports with R Markdown (HTML and TeX). I necessarily use Python when I need robustness for heavy projects and OOP (e.g. to improve factorization and inheritance).
Finally, I would say that there is ALWAYS a rigorous way to code in R, but it is not mandatory, and the language does allow bad practices that obtain the same results. It is not just about code performance, but also about the syntax of the functions or even simply about the code's implementation. For statisticians working in teams who have learned only R, I would recommend using packages with documentation (e.g. roxygen2) and unit tests (e.g. testthat).
  • asked a question related to Coding
Question
1 answer
Hello everybody,
I need to calculate RUMs (rigid unit modes) for a simple inorganic structure, the hexagonal (P6/mmm) polymorph of WO3. In my opinion, there should be no RUMs there, but I need a proof. Can anyone help?
I downloaded the code, but got a lot of errors from the online Fortran compiler...
Relevant answer
Answer
I am doing some research on RUMs, but I cannot find the CRUSH code. Could you help me with my study by sharing this code with me? I would be very grateful; my email is [email protected].
  • asked a question related to Coding
Question
2 answers
Hi, I am cleaning and reorganizing some data, and there are some codes from which I removed all quotations, yet I still cannot remove the codes themselves. The option to delete them is not available. Does anybody have a solution to recommend? Thank you.
Relevant answer
Answer
I was able to delete these codes later by dissociating them from groups and other links.
  • asked a question related to Coding
Question
4 answers
Hey guys,
There is a strange thing that is going on with my laptop.
I receive messages like:
ssh is not recognized as an internal or external command
but I know I have installed both the SSH client and the server.
I have tried a variety of methods to figure out why I cannot run these commands on my laptop.
Also, I found that the banner at the top of my terminal (cmd) does not mention "system32" (I have attached a photo here so you can see what's going on).
Now, my goal is to be able to run commands such as ssh.
What is your suggestion?
Thanks!
Maryam
Relevant answer
Answer
Hi Maryam
Are you sure you have the SSH software installed? If you execute just "ssh", do you get any answer from the system?
It looks like you are accessing a Linux box from a Windows computer. I would advise you to use a very popular program called MobaXterm, which lets you connect easily to Linux systems from Windows, since it includes an X server and an SSH client. Check it out at:
Best of luck. Let me know if it helps.
Paco
  • asked a question related to Coding
Question
2 answers
Hello,
I'm trying to create a genind object from an SNP dataset from whelk, to examine allelic richness between sites for some COI_16S sequences (combined sequences from the same individual).
My dataframe is in long format, with individual SNPs coded as nucleotides (A, T, C, or G) and uninformative sites coded as "NA". Two columns in the initial dataframe identifying specimens and populations were extracted as a character vector "ind" and a factor "popchar", respectively, and removed from the dataframe prior to converting. An additional character vector of locus names, "locichar", was also created. The final dataframe contained the SNPs alone and was called "Whelk_SNP_file_dataonly".
Here is the code for the function df2genind():
df2genind(Whelk_SNP_file_dataonly, sep=" ", ploidy=1, ncode=1, ind.names=ind, loc.names=locichar, NA.char="NA", pop=popchar)
Each time I do this and look at the object using head(), I get this:
// 1 individual; 33 loci; 66 alleles; size: 15 Kb
// Basic content
@tab: 1 x 66 matrix of allele counts
@loc.n.all: number of alleles per locus (range: 2-2)
@loc.fac: locus factor for the 66 columns of @tab
@all.names: list of allele names for each locus
@ploidy: ploidy of each individual (range: 1-1)
@type: codom
@call: .local(x = x, i = i, j = j, drop = drop)
// Optional content
@pop: population of each individual (group size range: 1-1)
The file includes data from 242 individuals, not one. I cannot figure out why the function is reading my data as one individual. Does anyone have an idea of why adegenet is doing this and how I might fix it?
Relevant answer
Answer
Have you tried leaving out the "ind.names" option, so that the names are taken from the row names of your data frame?
  • asked a question related to Coding
Question
6 answers
I am looking for a rigorous way to code and analyse visual data (e.g. photos of products) for insights, specifically from a phenomenological perspective.
Relevant answer
  • asked a question related to Coding
Question
5 answers
I am currently working on a lunar rover for a university master's project. My colleagues have successfully developed code that considers not only where the solar panels sit relative to each other given the rover's shape (one on each side), but also the longitude and latitude of the position on the Moon.
I am reaching out to people in the relevant field for a helping hand in developing this code, as it does not yet include the sun's elevation. Although the elevation remains low at the poles, the likely landing site, it will change the power output of the panels, especially once terrain is also considered.
Advice/tips/suggestions fully welcome!!
Thank you.
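For what it's worth, the first-order effect of sun elevation on a panel is just the cosine of the angle between the sun direction and the panel normal; a minimal R sketch with made-up geometry (p_peak and the vectors are placeholders):
panel_power <- function(sun_dir, panel_normal, p_peak = 100) {
  cos_theta <- sum(sun_dir * panel_normal)  # dot product of unit vectors
  p_peak * max(cos_theta, 0)                # zero output if the sun is behind the panel
}
elev <- 10 * pi / 180                       # 10 degrees of solar elevation
sun  <- c(cos(elev), 0, sin(elev))          # sun direction in a local frame
panel_power(sun, panel_normal = c(1, 0, 0)) # side-mounted panel facing +x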
  • asked a question related to Coding
Question
4 answers
I want to implement only a gate similar to the forget gate or output gate of an LSTM cell. Mind you,
I am not implementing the entire LSTM cell; rather, I have a certain input and the previous output, and I want to pass them through a gate with tanh or sigmoid as the activation. Is there a way to do that in Keras or in PyTorch? Please suggest.
Relevant answer
Answer
Yes, it can be implemented in Python with Keras. Although I have not implemented it personally, I can suggest something that might be useful. You can write a Python class for the gate that you want to program, using normal NumPy operations for the function. A runnable version of the algorithm would look something like this:
import numpy as np
def gate(prev_output, current_input):
    # Sigmoid gate over the combined inputs, as in an LSTM forget/output gate
    preactivated_output = prev_output + current_input
    return 1 / (1 + np.exp(-preactivated_output))
You might have to do vector embedding so that the sizes of the vectors are consistent. You can try this approach; good luck.
  • asked a question related to Coding
Question
5 answers
This is my code for PSO-PF; code.m is the main code. I am following <PSO Algorithm Particle Filters for Improving the Performance of Lane Detection and Tracking Systems in Difficult Roads>.
I have tried to combine them, but the result is very bad. If someone has free time, please help me.
Relevant answer
Answer
Dear,
Please check the attached files
THESE Particle Swarm Optimization and Particle Filter Applied to Object Tracking.pdf
Regards
  • asked a question related to Coding
Question
19 answers
When you have two experts collaborating on a problem, you get better-quality data in less time and with less grant money. Yet it astounds me how many researchers insist on writing code themselves.
To address this, I started a program that connects science researchers with volunteer software developers; my hypothesis is that facilitating these collaborations will result in better, more customizable software. I'm looking for researchers who want to participate in the pilot of this program: someone who has a coding project and is willing to collaborate with a software developer to get it built.
If you're interested, could you contact me? Thank you.
Heather Claxton, PhD
Relevant answer
Answer
Let's do it.
  • asked a question related to Coding
Question
11 answers
We have had so many problems with NVivo (activation, PC vs. Mac compatibility), and it is very expensive.
Relevant answer
Answer
@Vanessa T Vaillancourt
Other major qualitative analysis packages include MAXQDA and RQDA.
There is also WeftQDA, which is free software; you will find it via Google.
  • asked a question related to Coding
Question
15 answers
The question is whether (A) an analog system, or (B) a digital system, reflects the reality we see.
The A solution is characterized by (1) derivatives and (2) the hypothesis of continuity. The two match each other: without continuity there is no derivative, and without the derivative there is no continuity. Both were used by Isaac Newton in his theory of "fluxions" in calculus; a fluxion is "the term for derivative in Newton's calculus" [1].
The B solution can be represented neither by derivatives nor by the hypothesis of continuity. Again, the absence of both conditions, (1) and (2), is self-consistent. This fact, that the absence of (1) and (2) poses no problem, remained hidden for centuries behind the spurious priority dispute over calculus that followed, which Newton, as president of the Royal Society, fanned against Leibniz in 1713.
But Galois, around 1830, rediscovered a problem that had stood for 350 years, determining a necessary and sufficient condition for a polynomial to be solvable by radicals, which allows calculus to be done over finite integer fields, or Galois fields, thus eliminating the need for continuity in calculus.
How? Usual calculus requires continuity for the existence of derivatives, built on the four operations of arithmetic. It does seem necessary to require continuity, as Cauchy did in analysis over the field of real numbers. However, over fields of finite integers, such as Galois fields, calculus can be defined exactly, without requiring continuity.
Continuity is therefore an artifact of the formulation, and should be avoided. This complies with quantum mechanics and the work of Leon Brillouin, in 1956. It is fictional to consider continuity in mathematics, physics, computer science, and code. We are led today to consider finite integer fields, such as Galois fields, in calculus. We eschew the so-called "real numbers," as they include the irrationals, which cannot be counted. The sum of two numbers in a Galois field is always a number in the Galois field. The sum of two real numbers is never an infinitesimal; infinitesimals can never be created, nor do they exist.
The conclusion is that digital signal processing is the reality, not analogue processing. There is no effective quantization in digital processing; the quantum nature simply asserts itself. And this changes how we should view calculus: continuity is not required if one uses Galois fields. What is your opinion?
Relevant answer
Answer
JP: But that does not seem to exclude the non-quantum case.
  • asked a question related to Coding
Question
16 answers
Hello
I've collected socio-economic data using ODK, which I'm analyzing with R. I have a multiple-choice question with several response options, and I would like the frequency of each option.
I tried to convert every option of the variable into a Boolean variable using this code:
library(stringr)
qcm$q0_cacao <- ifelse(str_detect(qcm$farm_type, pattern = "cacao"), 1, 0)
qcm$q0_cafe <- ifelse(str_detect(qcm$farm_type, pattern = "cafe"), 1, 0)
qcm$q0_anacarde <- ifelse(str_detect(qcm$farm_type, pattern = "anacarde"), 1, 0)
qcm$q0_hevea <- ifelse(str_detect(qcm$farm_type, pattern = "hevea"), 1, 0)
....
....
I would like to calculate the frequency of each option in R and present it in tabular form as follows: https://www.dropbox.com/s/jeofweytnhansn3/result.xlsx?dl=0
I know how to do it in Stata using the command "mrtab q0_cacao - q0_xxxx by (village_name) column", but I would like to do all my analysis using only R.
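For reference, once the Boolean dummies exist, a grouped summary reproduces the mrtab-style frequency table; a minimal sketch assuming the dplyr package and the qcm data frame above (village_name is assumed to be a column of qcm):
library(dplyr)
qcm %>%
  group_by(village_name) %>%
  summarise(across(starts_with("q0_"), ~ sum(.x, na.rm = TRUE)))
Each cell then holds the count of respondents in that village who selected that option; use mean() instead of sum() for proportions.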
Relevant answer
  • asked a question related to Coding
Question
7 answers
Hi!
I graduated with a Neuroscience and Psychology BSc in 2016, and my final year project was a bench-lab in vitro study of omega-3 oils applied to fluorescently-stained cortical neurons. I realised that although I had a passion for neuroscience, my first foray into applied research was not promising; I hated the tedium of the process.
I then did an MSc in Mental Health Sciences, with my thesis originally using an EEG/MEG dataset on psychosis patients. Again, I loathed trying to learn (the tedium of) Matlab and learnt instead that I'm fundamentally not a coder.
I changed to studying the mid-term after-effects of a psychedelic drug using a battery of questionnaires. I loved this psychometric approach, and it was easy enough running simple statistical analyses. However this was firmly psychological, not neuroscience.
Now I'm doing a PhD largely employing thematic and other qualitative analyses of altered states, which I thoroughly enjoy. But again, I'm still attempting to retain some know-how in neuroscientific methods. I'm doing some basic secondary EEG analyses on the aperiodic signal - though not loving even the rudimentary coding necessary.
My question is: what other neuroscience techniques are available out there that I may be more suited to? Can one even be a 'neuroscientist' without having to use a wet lab, or MATLAB?
Perhaps tDCS or TMS? Could someone elaborate on the types of questions that can be answered with these, or the analyses that are run after them?
Other than f/MRI, which I assume always requires programming, do other neuroimaging modes, e.g. PET/SPECT, also require this?
I'd also be very interested to hear people's thoughts on, or experience with, incorporating as much understanding of neuroscientific findings as possible into strictly psychology studies: while writing psychology papers, referring strongly to the neuro literature to inform and discuss the study's aims and findings.
Thank you very much,
I'd be really appreciative of any insight :)
Pascal
Relevant answer
  • asked a question related to Coding
Question
37 answers
We are working on a qualitative project that uses telephone interviews. Most of the interviews have been transcribed, so we want to start the coding process. We cannot afford ATLAS.ti or NVivo, but many open-source programs either do not cater to multiple collaborators or are otherwise limiting. For example, we found Taguette really easy to use; however, it does not support nested/hierarchical coding or visually pleasing multi-coloured highlighting.
Could you tell us about any other open-source software that would overcome the above-mentioned shortcomings? We're open to using Google Docs and Sheets; however, to be honest, we are still figuring out how to go about it without causing issues in the later stages.
Note: we're really new to qualitative data analysis.
Relevant answer
Answer
QDA Miner Lite is free, and works well for basic qualitative data analysis.
  • asked a question related to Coding
Question
3 answers
The data coding process in NVivo 12 is relatively straightforward, but I'm having difficulty using the hierarchy charts to visualize the information in terms of demographics (e.g., age, gender, etc.). Does anyone have any tips?
Relevant answer
Answer
I generally use case classification sheets for the demographic statistics of my respondents. I know it is not the same as the charts you can produce in NVivo, but it does make it easier to manage your cases, especially in bigger research populations. You can use case classifications in your analysis through compound coding queries, and also in the presentation of your population if you export your classification sheet and use the presentation options of MS Excel. This way, you combine the benefits of NVivo with the transparent presentation of Excel.
  • asked a question related to Coding
Question
4 answers
It would be very helpful if someone could provide, for example, an Excel sheet or the like showing how to apply NLDFT to calculate the pore size distribution from experimental N2 adsorption isotherms.
Regards,
Relevant answer
Following
  • asked a question related to Coding
Question
4 answers
This might be a long shot.
I am asking this question because I am currently writing my thesis proposal. It will be about the advantages and disadvantages of using VR in primary and/or secondary education.
Concerning the economic advantages, I am going to list reduced costs as an advantage (e.g. geography field trips).
However, I wanted to further elaborate on this and explain how education is of course benefiting from VR technologies but VR technologies are actually going to benefit from education too.
I believe I will need real life case studies of schools introducing students to VR development/graphic design to prove my point.
I am of course not referring to advanced courses.
I am looking for case studies from anywhere in the world. Curricular and extracurricular classes in schools.
I will then be able to argue that some schools are going to benefit the job market by training future VR specialists.
Many thanks in advance for your reply.
Relevant answer
Answer
Nice Dear Martin G. Debattista
  • asked a question related to Coding
Question
2 answers
Good evening!
I am trying to prove the existence of asset-based poverty traps (Carter & Barrett, 2006) by using the threshold estimation developed by Professor Bruce E. Hansen in his paper "Sample splitting and threshold estimation", Econometrica (2000). I have already downloaded the Stata program from Professor Hansen's site: http://www.ssc.wisc.edu/~bhansen/progs/ecnmt_00.html. However, I have not been able to install the two commands his programs use, "thresholdtest" and "thresholdreg" (I tried findit "thresholdtest"/"thresholdreg" and found no results).
I would therefore really appreciate it if somebody could point me to the source for installing both commands in Stata.
Thanks in advance
Édgar 
Relevant answer
Answer
Details for implementing this can be found on Hansen's website; see the attached link.
  • asked a question related to Coding
Question
5 answers
I am doing research on neutron spectra measurement and am interested in response function calculation. I want to understand the theory behind NRESP and NEFF. Please share your literature with me. Thank you!
Relevant answer
Answer
My email is [email protected].
Best wishes!
  • asked a question related to Coding
Question
6 answers
I want to apply the NSGA-III algorithm to a dataset for feature selection; could anyone here help me with this? I have code for NSGA-II in MATLAB and the dataset in an Excel file (some attack data).
Relevant answer
Answer
Monika Roopak, did you solve this problem somehow? I am stuck on the exact same problem: how to apply the NSGA-II algorithm to a dataset. Please guide me if you can. Thank you.
  • asked a question related to Coding
Question
2 answers
Hello everyone,
I am using the following code to execute the Dinkelbach algorithm as in the paper attached here. I have a problem with the initialization of the P values: they are initialized in the code, but the sum function does not take these values into account. Does anyone have an idea how to solve this problem?
Thank you in advance.
Relevant answer
Answer
I want MATLAB code for the Dinkelbach algorithm to solve the cell formation problem.
If anyone has the code, please share it.
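For reference, the core of Dinkelbach's method for maximizing a ratio N(x)/D(x) (with D > 0) is to solve the parametric problem max_x N(x) - q*D(x), update q to the ratio at the maximizer, and stop when the optimal value reaches zero. A toy R sketch on a made-up one-dimensional problem (not the cell formation problem itself):
N <- function(x) 1 + x - x^2 / 4      # made-up numerator
D <- function(x) 1 + x                # made-up positive denominator
dinkelbach <- function(q = 0, tol = 1e-8) {
  repeat {
    # Solve the parametric subproblem max_x N(x) - q * D(x) over [0, 2]
    opt <- optimize(function(x) N(x) - q * D(x), c(0, 2), maximum = TRUE)
    if (opt$objective < tol) return(list(x = opt$maximum, ratio = q))
    q <- N(opt$maximum) / D(opt$maximum)   # Dinkelbach update
  }
}
dinkelbach()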
  • asked a question related to Coding
Question
7 answers
Hello
Please can you help me write this equation in MATLAB 2015a? I have tried many ways of writing it, but I could not get any result.
Thank you for your help in advance.
Relevant answer
Answer
This is a MATLAB function which you can save and use more conveniently.
This implementation would be much faster than using a for loop.
Note that N should be large enough to give you sufficient precision.
function y = f(x,K,r)
N = 1e4;
i = 0:N;
y = 6/pi^2 * sum(exp(-i.^2*pi^2*K*x/r^2));
end
  • asked a question related to Coding
Question
30 answers
I am looking for MATLAB code for the multiple imputation method for missing data analysis. If anybody can help me, I will be grateful.
Relevant answer
Answer
This link should help with multiple imputation in MATLAB.
  • asked a question related to Coding
Question
3 answers
Dear all!
I am currently looking for code (preferably MATLAB) to solve the time-dependent drift-diffusion equations for a mixed ionic-electronic conductor with a single ionic and a single electronic species; e.g., the material is a hole-conducting semiconductor and has a single mobile, singly negatively charged ion. Contacts can be considered ion-blocking, and no chemical reaction should occur.
I have found several papers outlining the solution for the steady-state condition (e.g. Y. Gil et al., Solid State Ionics 179 (2008)), but I was unable to find something similar for the transient solution.
Unfortunately, my programming skills have steadily degraded ever since I left college, and I am looking for a more detailed outline to avoid the usual pitfalls ;)
If someone has recently come across something similar I am very grateful for any help!
Yours,
Chris
Relevant answer
Answer
Hi dear
you can set it.
best regards
  • asked a question related to Coding
Question
2 answers
I am not able to understand how the length of the string is compressed with this algorithm. For example, if the input string is abcabcbfc, then the output will be
abc(0,3)b(3,1)bf(3,1), and here the size of the encoded string is not decreased. Am I missing something? Can someone explain this?
Relevant answer
Answer
You need what are called a search buffer and a look-ahead buffer in order to implement it. Also note that LZ77 only shrinks the input when the matches are long relative to the cost of each emitted token; on a 9-character string like this one, the per-token overhead dominates.
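To make the bookkeeping concrete, below is a toy LZ77 encoder in R that emits (offset, length, next character) triples. It is a sketch only: matches are restricted to lie fully inside the search buffer, and real implementations bit-pack the triples instead of storing them in a data frame.
# Toy LZ77 encoder: returns one (offset, length, next character) triple per step
lz77_encode <- function(s, window = 32) {
  chars <- strsplit(s, "")[[1]]
  n <- length(chars)
  out <- data.frame(offset = integer(), length = integer(), nxt = character())
  i <- 1
  while (i <= n) {
    best_len <- 0L; best_off <- 0L
    for (j in seq_len(i - 1)) {          # candidate match starts in the search buffer
      if (i - j > window) next
      len <- 0L
      while (len < i - j && i + len <= n && chars[j + len] == chars[i + len])
        len <- len + 1L
      if (len > best_len) { best_len <- len; best_off <- i - j }
    }
    nxt <- if (i + best_len <= n) chars[i + best_len] else ""
    out[nrow(out) + 1, ] <- list(best_off, best_len, nxt)
    i <- i + best_len + 1
  }
  out
}
lz77_encode("abcabcbfc")   # three literals, then a (3, 3) match for "abc"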
  • asked a question related to Coding
Question
4 answers
I am solving hard, stiff ODEs and need an efficient ODE solver for these systems. MATLAB code would be very much appreciated.
Relevant answer
  • asked a question related to Coding
Question
3 answers
Software engineering: software effort estimation.
Relevant answer
Answer
There are several methods of software development effort estimation based on different size metrics, such as Function Points, Object Points, and Use Case Points. The methods based on these metrics use different environmental and technical factors which influence software development effort. I want to do research on Use Case Point-based software development effort estimation, so I need a dataset of industrial software projects in which the characteristics of the projects are given in terms of Use Case Points metrics.
  • asked a question related to Coding
Question
5 answers
I need to introduce Cr as a trap in the bulk region of the device I am simulating with Silvaco TCAD. Can anyone help me do this?
Relevant answer
Answer
I am not so sure what my profile looks like; I work with Silvaco Singapore. You are our customer, so please feel free to contact us via the official channel.
I visited IIT Bombay last year.
Thanks for your interest in and support of Silvaco TCAD. Really appreciated.
  • asked a question related to Coding
Question
2 answers
I'm trying to clone the repository for VCFtools from GitHub and configure it. These two scripts always run successfully in Git Bash:
However, I cannot get the next two scripts to work:
./autogen.sh ./configure
Whenever I try the first script, I get the following error message:
matth@DESKTOP-FKTJ1JU MINGW64 ~/vcftools (master)
$ ./autogen.sh
./autogen.sh: line 3: autoreconf: command not found
All the files are in the GitHub folder on my computer (the GitHub folder is in Documents, in case that makes a difference). I need this program for my Master's thesis. Does anyone have any idea what's going on?
Please help if you can. Thanks in advance.
Relevant answer
Answer
Have a look at the conda installation: https://github.com/vcftools/vcftools/issues/96. (The "autoreconf: command not found" error means the GNU Autotools are not installed in your Git Bash environment; installing VCFtools via conda sidesteps building from source.)
  • asked a question related to Coding
Question
7 answers
I have attached a file with the problem I have to code in MATLAB.
If the delay is constant, I can code it using dde23; but in the case where tau(t) = exp(t)/(1+exp(t)), we have to use the ddesd function. How do I code this using ddesd?
Relevant answer
Rakesh Kumar, but how did you model the time delay in MATLAB for your problem?
  • asked a question related to Coding
Question
7 answers
I want to send data with visible light
communication (VLC). Is there an approved method, or do I have to program everything (text files, PDF, multimedia MP4/MP3, WinRAR files, ...)? I want to send random data (a PRBS); is that suitable for sending?
Relevant answer
Answer
Adding to colleague Aparna's answer:
VLC uses the power LEDs illuminating a room to send data over relatively short distances; the LED current is switched on and off to transmit the binary-coded information as light pulses. Optical receivers detect the transmitted stream of light pulses, regenerate them, and pass them to the destination.
There are also high-spectral-efficiency modulation techniques to increase the number of bits per Hz.
You can build the system yourself. For more information to help you implement and test the system, please refer to the papers:
Best wishes
  • asked a question related to Coding
Question
4 answers
Hello
I am working on the following: I have 65 existing definitions of a concept, and from these 65 definitions I try to distill a representative definition via coding procedures. Thus, I code each definition to find patterns.
Now I have all the definitions coded, and I can see how many times each code is used across the existing definitions. For example: code X is present in 20 of the 65 definitions.
As I want to develop a representative definition (and not one which contains all codes), I would set a cut-off level, in the sense of: only codes which are present more than e.g. 20 times in the existing definitions will be included in my representative definition.
Now I was wondering: is there a way to determine the cut-off level statistically?
Relevant answer
Answer
I might ask: do you believe in the qualitative approach at all? "Determining a cut-off level statistically" is a quantitative notion. You should work out what "representative" means in the qualitative sense; in short, "representative" can be defined in terms of the conceptual variation found in the data. I think that the 65 definitions should be reduced to a few information-intense ones, a kind of conceptual core.
The article "Analysing qualitative data" (2000) by Catherine Pope, Sue Ziebland,Nicholas Mays could be an informative source. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1117368/
  • asked a question related to Coding
Question
4 answers
I am looking for MATLAB code for the EM algorithm for system identification models (i.e. ARX or ARMAX).
Relevant answer
Answer
See the MATLAB System Identification Toolbox and the MATLAB Statistics and Machine Learning Toolbox.
  • asked a question related to Coding
Question
4 answers
Does anyone have experience with the "coding focus groups" feature in ATLAS.ti 8? I have focus group data to analyse (grounded theory), and I am wondering what the advantages of this feature might be. Is it about having direct links between quotes and speakers?
Relevant answer
Answer
Fortunately for you, Meike, I would suggest the same link.
  • asked a question related to Coding
Question
4 answers
Does anyone have MATLAB code for solar PV cells?
Relevant answer
Dear Haidar,
welcome.
You can find a complete MATLAB model and its parameter extraction in our book chapter published by Elsevier:
Best wishes
  • asked a question related to Coding
Question
2 answers
Hi,
For my master's thesis I am exploring the effects of different types of stereotype activation (i.e., stereotype reactance, stereotype threat, and no stereotype activation) on negotiation performance. Based on the literature, I believe that the communication style (i.e., flirtatious vs. non-flirtatious) of the confederate might moderate the effect of stereotype activation on negotiation.
What I have done so far is dummy-code both the predictor variable (no stereotype activation = 0) and the moderator variable (non-flirtatious communication = 0). Additionally, I computed two new interaction-term variables: dummy stereotype reactance * dummy flirtatious, and dummy stereotype threat * dummy flirtatious. But I am not sure what I have to do next. I've tried the regular linear regression in SPSS: I placed both stereotype activation dummies and the communication style dummy in the first model, then added the interaction terms in the second model.
I've also tried Hayes' PROCESS moderation test. I conducted two separate tests because it doesn't allow you to insert multiple IVs, but I'm not sure this is the right way to go.
Could anybody explain to me how to conduct a moderation analysis with both the predictor and the moderator as categorical variables? Or am I on the right path, and if so, what do I do next?
Thank you very much!
Nikki
Relevant answer
Answer
The same way you do with any interaction (moderation): that is, add a term a*b,
where the a's and b's are as you described in your question. Now simply run your regression and see if the coefficients of the two a*b-type terms are significantly different from 0. This is a factorial type of model. The simplest description I know of is attached. You might also see this link for all the details:
Here you may download a PDF copy. Best, D. Booth
NB: remember, interaction and moderation are the same thing.
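As a concrete illustration of the same factorial model, in R it is one line, with the factors expanding into dummies and interaction terms automatically (the data frame and variable names below are hypothetical):
# 3-level activation factor crossed with a 2-level communication-style factor
fit <- lm(performance ~ activation * style, data = negotiation)
summary(fit)   # the activation:style rows are the moderation (interaction) tests
Dummy coding the factors by hand, as in SPSS, gives the same model; R's factor contrasts just automate it.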
  • asked a question related to Coding
Question
4 answers
Although my code apparently contains no errors, it doesn't produce a proper solution. I have attached some parts of my code. I would be grateful if you could give me some tips on how to implement SOS1 variables in GAMS.
Relevant answer
Answer
That's easy to implement!
Assume a typical mathematical program with equilibrium constraints as below:
Min f(x,y)
s.t.
g(x,y)>=0
y>=0
y * g(x,y) =0 (your complementarity condition i.e., 0<=y_|_g>=0)
-----------
To solve the problem above using SOS1 variables, you can reformulated it as follows:
Min f(x,y)
s.t.
g(x,y)>=0
y>=0
u = 0.5 * (y+g(x,y))
u - (vp + vn)=0
vp - vn = 0.5 * (y-g(x,y))
where, 'vp' and 'vn' are SOS1 variables that can be easily defined in GAMS as below.
----------
variables x , y , u;
SOS1 variables vp , vn;
----------
Good luck,
Morteza
  • asked a question related to Coding
Question
3 answers
In a two-bay portal frame, should I consider these two cases: the live load on each bay alone, or only the live load on the whole roof? And where can I find a reference (a code provision) answering this question? The same question applies to wind and dead load.
If you check typical two-span portal frames, the middle column is very small compared to the edge columns, so apparently designers consider only the case where the load is on the whole roof. But why? Is this part of a code?
Relevant answer
Answer