Computer - Science topic
Explore the latest questions and answers in Computer, and find Computer experts.
Questions related to Computer
I have fit a Multinomial Logistic Regression (NOMREG) model in SPSS, and have a table of parameter estimates. How are these parameter estimates used to compute the predicted probabilities for each category (including the reference value) of the dependent variable?
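For what it's worth, the predicted probabilities follow directly from the parameter estimates via the softmax formula: each non-reference category k gets a linear predictor eta_k = b_k0 + b_k1*x1 + ..., the reference category's eta is fixed at 0, and P(category k) = exp(eta_k) / sum_j exp(eta_j). A minimal sketch in Python (the coefficient values you would plug in come from your NOMREG parameter-estimates table; the numbers below are illustrative only):

```python
import math

def mnl_probabilities(coef_sets, x):
    """Predicted probabilities for a multinomial logit model.

    coef_sets: list of coefficient vectors [b0, b1, ...], one per
               non-reference category (as in the parameter-estimates table).
    x:         predictor values [x1, ...]; the intercept b0 is added here.
    The reference category has all coefficients fixed at 0, so its
    linear predictor is 0 and contributes exp(0) = 1 to the denominator.
    """
    # Linear predictor for each non-reference category
    etas = [b[0] + sum(bi * xi for bi, xi in zip(b[1:], x)) for b in coef_sets]
    denom = 1.0 + sum(math.exp(e) for e in etas)   # the 1.0 is the reference category
    probs = [math.exp(e) / denom for e in etas]
    probs.append(1.0 / denom)                      # reference category, listed last
    return probs

# Hypothetical model: one non-reference category, one predictor
probs = mnl_probabilities([[0.5, 1.0]], [2.0])
```

The probabilities always sum to 1, and the reference category's probability is recovered as 1/denominator even though it has no printed coefficients.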
Hello to all,
In my lab we recently had an electrical fault and lost the computer running the CXP software that was connected to our FC500 MPL flow cytometer. Since we cannot find the CD with the software anymore, I asked Beckman, but they were unable to give us a copy because it is very old. The cytometer model is out of production, and so, apparently, is the software.
Does someone have a copy of the CD to send to me? That would be really appreciated!
Tensor computing and quantum computing are two distinct fields with different applications. Tensor networks, such as MPS, PEPS, TTNs, and MERA, have been successfully used in classical machine learning and quantum machine learning, where they can be mapped to quantum computers for improved performance. These tensor networks are efficient for preparing ground states on classical computers and can be combined with quantum processors for tasks like time evolution, which can be intractable on classical computers. On the other hand, quantum computers aim to outperform classical computers in various computational tasks by utilizing the principles of quantum mechanics. Here is a quick comparison between quantum computing and tensor computing:
Quantum Computing:
1- Based on principles of quantum mechanics - uses quantum bits (qubits) that can exist in a superposition of 0 and 1
2- Leverages quantum phenomena like entanglement and interference
3- Can solve certain problems exponentially faster than classical computers (Grover's algorithm, Shor's algorithm, etc)
4- Still in the early stages of development with small-scale quantum computers built
5- Potential applications in cryptography, machine learning, molecular modeling, etc.
Tensor Computing:
1- Based on multidimensional array data structures called tensors
2- Used extensively in deep learning and AI for parameters and dataset representations
3- Leverages tensors for efficient parallel data processing and manipulation
4- Scales well on classical hardware like GPUs through frameworks like TensorFlow
5- Already in use in many machine learning applications like computer vision, NLP, etc.
For more information and details, please see:
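As a concrete illustration of the tensor side of the comparison above, here is a minimal tensor contraction in Python with NumPy's `einsum`; frameworks like TensorFlow dispatch this same kind of operation to GPUs (shapes and data below are arbitrary):

```python
import numpy as np

# Two random order-3 tensors (multidimensional arrays)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))
B = rng.standard_normal((6, 5, 3))

# Contract the shared indices j (size 5) and k (size 6) in one call,
# summing A[i,j,k] * B[k,j,l] over j and k to get C[i,l]
C = np.einsum('ijk,kjl->il', A, B)   # result has shape (4, 3)
```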
I am quite confused about what formula to use to compute my sample size. I will be conducting a Sequential Explanatory design wherein my QUANT phase will make use of mediation analysis and my qual phase will be interpretative phenomenology. How can I determine the sample size? What is the best formula to use?
For the investigation of the start-up of an alternating anaerobic-anoxic reactor, I would like to compute the potential biomass decay of aerobic organisms (specifically autotrophic ammonia oxidizers, AOB and AOA) in my inoculum. For this purpose, I am going to apply specific decay rates of AOB and AOA, but I lack knowledge about the autotrophic cell mass in my inoculum.
Can anybody provide average values or ranges for the share of autotrophs in the MLVSS of typical conventional activated sludge systems?
Thank You already for advice!
Can't create job for the server: CASTEP.
This computer was unable to communicate with the computer providing the server.
Hey! I want to know the hazard ratio for male vs female using the Kaplan-Meier survival command. However, so far SPSS can only compute the survival plot and statistical significance without giving me the hazard ratio. Is there any way to compute it? (Using the log-rank test, it only provides significance.)
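Two notes that may help. The direct route in SPSS is Cox regression (Analyze > Survival > Cox Regression) with sex as the covariate; Exp(B) in the output is the hazard ratio. Alternatively, a textbook approximation recovers the HR from the log-rank quantities for one group (observed events O, expected events E, and variance V): ln(HR) ≈ (O − E)/V, with SE = 1/sqrt(V). A sketch, with illustrative numbers only:

```python
import math

def approx_hazard_ratio(observed, expected, variance):
    """Approximate hazard ratio for group 1 vs group 2 from log-rank
    quantities: ln(HR) ~= (O1 - E1) / V, 95% CI via SE = 1/sqrt(V)."""
    log_hr = (observed - expected) / variance
    se = 1.0 / math.sqrt(variance)
    hr = math.exp(log_hr)
    ci = (math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se))
    return hr, ci

# Hypothetical log-rank output: O = 30, E = 20, V = 10
hr, ci = approx_hazard_ratio(30, 20, 10)
```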
Is it possible to compute the peak-to-background ratio in the case of a doublet, that is, when two peaks are not completely resolved?
So far we have (figure attached), trying to get the P/B ratio for the second peak (peak 1):
1.) Counts under peak 0
2.) Counts under peak 1
3.) Total counts under the combined region
As I understand it, simply computing (3) − (1) − (2) will give the background counts for both peaks; however, only the background under peak 1 is required.
Thank You
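One simple way to proceed, assuming the background is flat (or linear) across the doublet, is to apportion the combined background by the channel width of each peak region; a more rigorous route is to fit two Gaussians plus a background model to the combined region. A hypothetical sketch in Python (all numbers illustrative):

```python
def background_under_peak(total_counts, net_peak0, net_peak1,
                          width0, width1):
    """Split the combined background between two unresolved peaks.

    Assumes the background is flat (or linear) across the doublet,
    so it can be apportioned by the channel width of each peak region.
    total_counts - net_peak0 - net_peak1 is the background for both
    peaks, as in the question; the return value is peak 1's share.
    """
    background_total = total_counts - net_peak0 - net_peak1
    return background_total * width1 / (width0 + width1)

# Illustrative numbers: equal-width regions share the background equally
bg1 = background_under_peak(1000, 300, 200, 10, 10)
```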
Hello, I am trying to submit my paper to the Computers and Electronics journal, but I can't seem to find their template anywhere in their system, even though they mention "Your Paper Your Way". If anyone has the journal template, please share it here.
Thank You
Kindly assist with my challenge. I used stress/atom commands in LAMMPS to calculate the von Mises stress and hydrostatic stress of silicon nanometric cutting. Unfortunately (or fortunately), I got a hydrostatic stress between -3 GPa and 1 GPa, which is low compared to 11-14 GPa for Si and a diamond tool with 204241 atoms. Note, the hydrostatic stress is for the tool atom group. I attached the stress part of the script. If there is a need to attach the entire script, I will.
Hello everyone, I am simulating the DEP on the particle, I have some problems now:
- How do I calculate the dielectrophoretic force on the particle, so that its value appears in the results?
- Should I compute all the studies at once, or just the one related to the result I want?
Please help me solve the problems, thank you.
Hi everyone, my computer doesn't recognize the NanoDrop and shows an error Code 10. Could it be a driver error or hardware damage? Thanks for your help! :)
Hello,
I am quantifying bacteria exposed or not to a certain chemical at specific time points.
How can I compute the results in a relative manner? That is: what is the formula for a ∆∆Ct analysis of bacteria?
Relative quantification is normally given for a gene of interest (GOI) over one or more reference genes (REF), by calculating the cycle difference between GOI and REF and then between the exposed and non-exposed samples.
For bacteria, would these be acceptable:
∆Ct = Ct_t − Ct_0
∆∆Ct = ∆Ct (exposed) − ∆Ct (control)
where t is a specific time (0, 1, 2, 4, 24 h post-exposure) and 0 is t = 0. The fold increase would then be 2^(−∆∆Ct).
Is this correct?
Thank you
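For what it's worth, the arithmetic of the scheme described above is straightforward; the usual caveats are that ∆Ct is normally taken against a reference gene (e.g. 16S rRNA for bacteria) rather than only against t = 0, and that 2^(−∆∆Ct) assumes roughly 100% amplification efficiency. A minimal sketch in Python with made-up Ct values:

```python
def fold_change(ct_exposed_t, ct_exposed_0, ct_control_t, ct_control_0):
    """2^(-ddCt) relative quantification between exposed and control
    bacteria, each referenced to its own t = 0 Ct value, mirroring the
    scheme in the question (Livak & Schmittgen-style)."""
    d_ct_exposed = ct_exposed_t - ct_exposed_0
    d_ct_control = ct_control_t - ct_control_0
    dd_ct = d_ct_exposed - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values: exposed drops by 4 cycles, control unchanged
fc = fold_change(20.0, 24.0, 22.0, 22.0)   # 16-fold increase
```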
I’ve been trying to login multiple times but, I keep failing.
Hi everyone. I am very new to computational protein work, so I have no idea how to install AlphaFold 2 to run on my computer.
I heard that if you install AlphaFold plus all required dependencies from source, it runs significantly faster than using a (pre-built) container.
I don't really have any idea how those two approaches work. If anyone could help me with this, I would be really grateful.
Does any of you know how to do this?
details:
MIPS USB Cameras provide a quick and easy means of displaying and capturing high-quality video and images on any USB 2.0-equipped desktop or laptop computer running a supported Microsoft® OS.
please send me.
thanks
Karthick
I want to detect multivariate outliers in my dataset, which contains participant responses to various questionnaires such as DASS-21, PSWQ etc. Should I compute Mahalanobis distance using total scores for constructs such as depression, anxiety and worry or should I use item-level data from the questionnaires before aggregating them into their total scores? When using item-level data, 25 participants are detected as outliers among 370 participants, but when using the total scores, only 1 participant is detected as multivariate outlier.
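For reference, the squared Mahalanobis distance is D² = (x − μ)ᵀ S⁻¹ (x − μ), computed per participant from the variable means and covariance matrix and typically compared against a chi-square critical value with p degrees of freedom; note that item-level data greatly inflate p, which is one reason far more cases get flagged there. A minimal NumPy sketch:

```python
import numpy as np

def mahalanobis_distances(X):
    """Squared Mahalanobis distance of each row of X (n x p) from the
    column means, using the sample covariance matrix.
    Assumes the covariance matrix is invertible (no constant or
    perfectly collinear columns)."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # sum_jk diff[i,j] * cov_inv[j,k] * diff[i,k] for each row i
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# Toy data: four points at the corners of a unit square
d2 = mahalanobis_distances([[0, 0], [1, 0], [0, 1], [1, 1]])
```

Each D² would then be compared to, e.g., the chi-square critical value at p = number of variables and alpha = .001, the usual outlier criterion.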
For a study on anthropomorphic design of AI and trust, we are currently looking for established questionnaires regarding the perception of the role of computers in human-machine teams.
A primary example would be that the machine is perceived as "a teammate vs a tool".
Feel free to also answer with questionnaires on related constructs.
Thanks in advance and kind regards,
Martin
I would like to create visually appealing diagrams for my publications. For instance, to create graphical abstracts for articles, I want to know which computer tool can assist me in this regard.
Thank you in advance for your responses.
I think SPSS made my computer slow. Be careful when running it on your computer.
Hello, dear community!
I am working on gamma-ray spectrometry data to delineate K-enrichment areas. I have created the maps using Geosoft, but I get errors when I use the K/eTh ratio (a map with few contours). I don't know what I am doing wrong.
Can anyone please help me?
Thank you
A human can only aspire to fluency in so many different languages before mixing up words due to code switching. Thus, MAYBE those who cannot learn so many languages turn to linguistics and coding to earn money.
How do I model a weir/barrage and import it into OpenFOAM?
Which type of solver is most suitable for fluid flow?
Suppose I compute a least squares regression with the growth rate of y against the growth rate of x and a constant. How do I recover the elasticity of the level of y against the level of x from the estimated coefficient?
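If the growth rates are log differences (Δln y and Δln x), no recovery step is needed: the estimated slope is itself the elasticity, since Δln y ≈ β Δln x and d ln y / d ln x = (dy/y)/(dx/x) = β. For simple percentage growth rates the same holds approximately for small changes. A small NumPy check on a noiseless constant-elasticity relation y = A·x^β:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumprod(1 + 0.05 * rng.random(200))   # a positive, growing series
elasticity_true = 0.7
y = 2.0 * x ** elasticity_true               # constant-elasticity relation

gx = np.diff(np.log(x))                      # growth rate of x (log difference)
gy = np.diff(np.log(y))                      # growth rate of y (log difference)

# OLS of gy on gx with a constant: the slope recovers the elasticity,
# and the constant absorbs any drift term
slope, intercept = np.polyfit(gx, gy, 1)
```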
Hi Everyone,
I want to know some of the best tools or platforms for planning research works (PhD or/and Master's research). Till now I have been using MS Excel to schedule my work, but it is not smart. So, I want to know if a tool or platform is there to use as a daily, weekly, or monthly planner. It can be free or paid; app-based (which means I have to download it on my computer and use it), web-based (which means it's only available online), or both; licensed or unlicensed; AI-based or without AI assistant.
I will give some options for our research community to comment in the chat below. The options are:-
1. MS-Excel
2. Google Calendar
3. MS-Word (making tables)
4. Desktop apps (with AI assistant)
5. Desktop apps (without AI assistant)
6. Web-based apps (with AI assistant)
7. Web-based apps (without AI assistant)
8. No digital planner, only personal diary
9. Other ways
If your option is "Other ways", please mention it.
Thank you.
Regards,
Kaustav Sengupta
#plannertool #tool #researchtool #researchplanner #dailyplanner #weeklyplanner #monthlyplanner #schedule #plannerapp #ai #aicommunity #plannerly #researchers #research #researchcommunity #researchstudy #researchmethods #researchassistant #elsevier #researchjourney
what if i worship my computer as Moses had opposed? has Abraham evolved?
or, are we relegated to a truth where "GOD" is Good = Savagery?
no. right?
so, the context becomes if we worship the knowledge gained through computers, then we are adding on AI, and semantically crossing the line between church and state, no?
this puts the Law of Artificial Intelligence on its head.
what comes of this though is a merger between Metadata, and FEAR itself.
no joe king....
Hi, I recently installed a small Windows HPC cluster (1 head node and 4 compute nodes) in my lab, and I want to run Materials Studio on it. Does anyone have experience running Materials Studio on a Windows HPC cluster? (I would appreciate a detailed answer.)
Thanks!
Hello all, I want to compute the Energy drain with ns2. Can you help please?
If an activation function has a jump discontinuity, then in the training process, can we implement backpropagation to compute the derivatives and update the parameters?
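In practice, yes: the usual workaround is to replace the non-existent derivative with a surrogate in the backward pass, pretending the jump activation was a smooth or clipped-identity function. This "straight-through estimator" is the standard trick for binarized networks and spiking neurons. A minimal sketch with a Heaviside step and a hard-tanh surrogate gradient (names are illustrative):

```python
import numpy as np

def step_forward(z):
    """Heaviside step activation: has a jump discontinuity at z = 0,
    and derivative 0 almost everywhere."""
    return (z > 0).astype(float)

def step_backward_ste(z, grad_out):
    """Straight-through / surrogate gradient: in the backward pass,
    pretend the step was a hard-tanh (clipped identity), whose
    derivative is 1 on |z| <= 1 and 0 elsewhere, so gradients can
    still flow and parameters can be updated."""
    surrogate = (np.abs(z) <= 1).astype(float)
    return grad_out * surrogate

z = np.array([-2.0, -0.5, 0.5, 2.0])
out = step_forward(z)                     # forward uses the true step
grad = step_backward_ste(z, np.ones(4))   # backward uses the surrogate
```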
Is there any resemblance in how "information flow" happens in humans (Molecular Biology/Neurology) and computers (Computer Networking)?
This question came up after a Computer Networking (CN) lecturer remarked that no new methodologies for information flow in CN remain to be discovered.
I got interested in looking for the answer in the Central Dogma of Molecular Biology, and am still awaiting further insights.
The future of blockchain-based internet solutions
Blockchain is defined as a decentralized and distributed database in the open source model in a peer-to-peer internet network without central computers and without a centralized data storage space, used to record individual transactions, payments or journal entries encoded using cryptographic algorithms.
In current applications, blockchain is usually a decentralized and dispersed register of financial transactions. It is also a decentralized transaction platform in a distributed network infrastructure. In this formula, blockchain is currently implemented into financial institutions.
Some banks are already trying to use blockchain in their operations. If they do not, other economic entities implementing blockchain, including fintechs, could become more competitive in this respect. However, cryptocurrencies and a secure record of transactions are not the only blockchain applications; various other potential applications are being considered for the future.
Perhaps these new, different applications already exist in specific companies, corporations, public institutions or research centers in individual countries. In view of the above, the current question is: in what applications, besides cryptocurrency, is blockchain used in your company, organization, country, etc.?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Hello everyone
I need to download an older version of Optisystem (Optisystem 13), but I can't seem to find a download link for this version. I had this version on my computer, but it crashed. Is there any way to download it again?
Things are likely to become yet more complex as the use of artificial intelligence by artists becomes more widespread, and as machines get better at producing creative works, further blurring the distinction between artwork made by a human and that made by a computer. So here the question arises: should computers be given such status?
Hi! DEA newbie here. How do I compute the technical efficiency of DMUs in Microsoft Excel if I have 2 or more outputs? I'm going for an output-oriented DEA with VRS assumptions, as suggested. Thank you very much.
I want a Scopus Q3 journal that does not take too much time to give me the first decision.
What is the best Programming Language for a ninth grader that has no previous experiences in programming?
Do you know of any of this software that is compatible with Apple computers? I tried GelAnalyzer4, PyElph, and the free online gel analyzer, but they are not working anymore.
What would be the minimum amount of protein needed to detect heme by TMBZ (3,3′,5,5′-tetramethylbenzidine)? I am using cytochrome c from equine heart (positive control), chitinase (negative control), and Z-ISO (unknown). I want to do a quick spotting on filter paper for the presence of heme.
I would greatly appreciate any suggestions.
Computer desks must conduct electricity to the ground and be grounded, in order for the user not to endure an electric shock; the floor must also be grounded, and this applies to other furniture holding electrical appliances.
Hi,
Can someone point me to a paper/reference that describes how to compute d-prime for data in a 2AFC/2IFC design?
Thank you
Luigi
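A standard reference is Macmillan & Creelman, Detection Theory: A User's Guide, which covers 2AFC/2IFC designs. For an unbiased observer the computation itself is one line: d′ = √2 · z(Pc), where Pc is the proportion correct and z is the inverse normal CDF. A sketch using only the Python standard library:

```python
import math
from statistics import NormalDist

def dprime_2afc(proportion_correct):
    """d' for an unbiased two-alternative (or two-interval) forced
    choice task: d' = sqrt(2) * z(Pc). Pc must lie strictly between
    0 and 1 (apply a correction for perfect scores first)."""
    return math.sqrt(2) * NormalDist().inv_cdf(proportion_correct)

# Chance performance (Pc = 0.5) gives d' = 0
d0 = dprime_2afc(0.5)
```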
Does anyone want to collaborate?
I am looking for some people to collaborate on research with me. I focus on computer science-related topics (information security management or business informatics).
Hi,
I have a question regarding the derivation in the attached paper.
How did the author get eq. 27 by solving the dp/dx integrals?
When I computed integrals for dx/h^4 and dx/h^2 with limits from 0 to lambda, my resulting answer is 0 when I substitute the limits.
I am using
to calculate integrals
h = 1 + phi*(sin(2*pi*(x/lambda)))
Could anyone please help
I really appreciate any help you can provide.
regards,
Shannon
Our research is quasi-experimental. There are two groups to be tested under different teaching approaches; however, we don't know how many participants should be in each group.
Define the following terms:
Viruses
Information
Computer
Test the Hypothesis: "DIGITAL DENKEN" based on basic software "PLAI" (polytrope linguistic artificial intelligence) complies with EU-recommendations and EU-rules for multiple AI-softwares within one system.
We will make the software system available for examination. PLAI runs on a commercially available computer. DIGITAL DENKEN includes human workshops combined with software usage down to laptops. PLAI offers at least 12 different usage-approaches. Examine now one of them as a first step.
"Which topics do you recommend for computer engineering with a focus on cyber security and deep learning, or any other hot topics suitable for a PhD degree in computer and communication engineering? I am in the early stages of my research and would appreciate any suggestions, including relevant papers. Additionally, I am seeking a co-supervisor to guide me throughout my research."
I am trying to gather information about Green's functions for the steady Euler and Navier-Stokes equations, which are the linearized response to point source perturbations. The ultimate goal is to compute the force that these singular solutions exert on solid boundaries. This is a text-book classic exercise in the case of potential flow (i.e., force exerted by potential point sources or vortices on a circular cylinder, for example), and I would like to learn more about the analogous situation in compressible, inviscid flow (subcritical flow past an airfoil, for example) or incompressible, viscous flow (flow past a flat plate, for example).
Hyperspectral Imaging, Hyperspectral Classification, Statistical Test
Hi, eminent research community. I want to do research on companies' sustainable reporting frameworks and efficiency, and for this I need to calculate sustainable reporting scores. Can you please guide me on how to compute them?
Hello,
I am trying to decide a cut-off value (roughly equivalent to a "change-point") in an ELISA assay, using an R package. To make our assay convenient, I do not include either a negative or a positive control in every plate.
A reference paper (doi: 10.1590/0074-02760160119) indicates a package, saying: "the Pruned Exact Linear Time (PELT) algorithm was selected (Killick et al. 2012) with the CUSUM method as detection option (Page 1954). The PELT algorithm can therefore rapidly detect various change-points in a series. The CUSUM method is based on cumulative sums and operates as follows: The absorbance values x are ordered in ascending values (x1, …, xn) and sums (S) are computed sequentially as S0 = 0, Si+1 = max(0, Si + xi − Li), where Li is the likelihood function. When the value of S exceeds a threshold, a change-point has been detected."
I am not sure how to create such a code and could not find a package on the Internet. If someone knows or has experience deciding a "change-point" using R, please tell me how to do that.
Thank you.
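Two pointers that may help. The PELT algorithm of Killick et al. is implemented in the R package `changepoint` (e.g. `cpt.mean(x, method = "PELT")`), so you may not need to code it yourself. And the cumulative-sum recursion quoted above is easy to implement directly; note that in Page's classic formulation the subtracted term is a reference value (target mean plus an allowance), not a likelihood. A minimal one-sided CUSUM sketch in Python:

```python
def cusum_changepoint(values, reference, threshold):
    """One-sided CUSUM (Page 1954): S_0 = 0,
    S_{i+1} = max(0, S_i + x_i - reference).
    A change-point is flagged at the first index where S exceeds the
    threshold; returns that index, or None if S never crosses it."""
    s = 0.0
    for i, x in enumerate(values):
        s = max(0.0, s + x - reference)
        if s > threshold:
            return i
    return None

# Illustrative series: a shift upward after the tenth value
idx = cusum_changepoint([0.1] * 10 + [1.0] * 5, reference=0.5, threshold=1.0)
```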
Hello,
I have performed an MM-PBSA calculation on a protein-ligand complex. I used approx. 750 frames (out of a total of 15000) to compute the free energy change of binding. Then I used a Python code to compute the ACF of the total delta G of binding. But the obtained ACF plot does not show the expected exponential decay. I am not able to figure it out. I am attaching my plot here.
Any suggestions would be highly appreciated.
I am using ELK-FPLAPW code to compute band structure of some materials. From the output files, I can plot l (or m_l) resolved band structure using Origin-Pro. But since I have the license of this product for a limited period of time I want to use xmgrace, which is free, to plot the same.
Has anyone done it? If yes then could you guide me or point me toward appropriate resources.
I am modelling seepage with SEEP/W, and the resulting XY gradient contours fall outside the phreatic line. I think this does not make sense, so how do I compute the safety factor against boiling if the exit gradient computed by SEEP/W is wrong?
Every answer would be appreciated. Thanks in advance.
Hello everyone
I need to compute in MATLAB both the Lyapunov exponents for 2D discrete-chaotic Maps
For example the 2D Logistic map defined as:
x(n+1)=r*(3*y(n)+1)*x(n)*(1-x(n));
y(n+1)=r*(3*x(n)+1)*y(n)*(1-y(n));
where, r = 0.4: 0.01: 1.2
I am comfortable computing the LE for any 1D discrete chaotic map, such as the 1D logistic map, with the differentiation method, but I am confused about the 2D version. Is there a method for computing its two LEs from the two generated time series (x-series and y-series)? Or how can the differentiation method be extended to this 2D version?
Please help and provide the sample Matlab code (for any discrete 2D chaotic map) if anyone is having it. I shall be highly thankful for the kind help and guidance.
Best Regards,
E. Mehallel
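When the map equations are known, the usual approach is not a time-series method but the Jacobian (QR / Gram-Schmidt) method: propagate an orthonormal pair of tangent vectors through the 2x2 Jacobian at each step, re-orthonormalize, and average the logs of the stretching factors. The 1D differentiation method is exactly the special case with a single tangent vector. A Python sketch for the map above (straightforward to port to MATLAB; parameters and initial conditions are illustrative):

```python
import math

def lyapunov_2d(r, n=5000, transient=500, x=0.4, y=0.3):
    """Estimate both Lyapunov exponents of the coupled 2D logistic map
        x' = r(3y+1)x(1-x),   y' = r(3x+1)y(1-y)
    via the Jacobian + Gram-Schmidt (QR) method."""
    v1, v2 = [1.0, 0.0], [0.0, 1.0]      # orthonormal tangent frame
    s1 = s2 = 0.0
    for i in range(transient + n):
        # Jacobian of the map at the current point (x, y)
        j11 = r * (3 * y + 1) * (1 - 2 * x)
        j12 = 3 * r * x * (1 - x)
        j21 = 3 * r * y * (1 - y)
        j22 = r * (3 * x + 1) * (1 - 2 * y)
        # propagate both tangent vectors through the Jacobian
        w1 = [j11 * v1[0] + j12 * v1[1], j21 * v1[0] + j22 * v1[1]]
        w2 = [j11 * v2[0] + j12 * v2[1], j21 * v2[0] + j22 * v2[1]]
        # Gram-Schmidt (thin QR): n1, n2 are the stretching factors
        n1 = math.hypot(w1[0], w1[1])
        v1 = [w1[0] / n1, w1[1] / n1]
        dot = w2[0] * v1[0] + w2[1] * v1[1]
        w2 = [w2[0] - dot * v1[0], w2[1] - dot * v1[1]]
        n2 = math.hypot(w2[0], w2[1])
        v2 = [w2[0] / n2, w2[1] / n2]
        if i >= transient:                # skip the transient
            s1 += math.log(n1)
            s2 += math.log(n2)
        # advance the orbit (both updates use the old x and y)
        x, y = r * (3 * y + 1) * x * (1 - x), r * (3 * x + 1) * y * (1 - y)
    return s1 / n, s2 / n
```

Sweeping r over 0.4:0.01:1.2 then just means calling this in a loop. As a sanity check, for small r the orbit collapses to the origin, where the Jacobian is diag(r, r), so both exponents tend to ln(r).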
Ambient Intelligence vs Internet of Things: what are the similarities and differences?
The intersection of neuroscience, electronics, and AI has sparked a profound debate questioning whether humanity can be considered a form of technology itself. This discourse revolves around the comparison of the human chemical-electric nodes—neurons, with the nodes of a computer, and the potential implications of transplanting human consciousness into machines.
Neurons, as the elemental building blocks of the human brain, operate through the transmission of electrochemical signals, forming a complex network that underpins cognitive functions, emotions, and consciousness. In contrast, computer nodes are physical components designed to process and transmit data through electrical signals, governed by programmed algorithms.
The notion of transferring the human mind into a machine delves into the essence of human identity and the philosophical nuances of consciousness. While it may be feasible to replicate certain cognitive functions within a machine by mimicking neural networks, there are profound ethical and philosophical implications at stake.
Critics argue that even if a machine were to replicate the intricacies of the human brain, it would lack essential human qualities such as emotions, subjective experiences, and moral reasoning, thus failing to encapsulate the essence of human consciousness. Furthermore, the concept of integrating the human mind with machines raises complex questions about the nature of identity and self-awareness. If the entirety of a human mind were to be transplanted into a machine, the resulting entity may no longer fit the traditional definition of human, but rather a hybrid of human cognition and artificial intelligence.
On the other hand, proponents of merging human minds with machines foresee the potential for significant advancements in AI and neuroscience, suggesting that through advanced brain-computer interfaces, it might be possible to enhance human cognition and expand the capabilities of the human mind, blurring the boundaries between organic and artificial intelligence.
As the realms of electronics and AI continue to evolve, the question of whether humanity itself can be perceived as a form of technology remains a deeply contemplative issue. It is imperative that as these technological frontiers advance, ethical considerations and respect for human values are prioritized, ensuring that any progression in this field aligns with the preservation of human dignity and integrity.
The advancement of technology and the intricacies involved in simulating human cognitive processes suggest that it might be plausible for machines to exhibit emotions akin to humans. As the complexity of AI systems increases, managing a vast number of nodes and intricate algorithms could potentially lead to unexpected and seemingly irrational behaviors, which might even resemble emotional responses.
Similarly to how a basic machine operates in a predictable and precise manner devoid of human characteristics, the proliferation of complexity in a machine's structure could lead to the emergence of seemingly irrational or emotional behaviors. Managing the intricate interplay between a multitude of nodes might result in the manifestation of behaviors that mimic emotions, despite the absence of genuine human experience.
These behaviors could be centered around learned and preprogrammed principles, allowing the machine to respond in a manner that mirrors human emotions.
Moreover, the ability to simulate emotions in machines has gained traction due to the growing understanding of the role of neural networks and the intricate interplay of various computational elements within AI systems. As AI models become more sophisticated, they could feasibly process information in a way that mirrors the human emotional experience, albeit based on programmed responses rather than genuine feelings.
While the debate about whether machines can truly experience emotions similar to humans remains unsettled, the increasingly complex and interconnected nature of AI systems hints at the potential for machines to display a form of emotive behavior as they grapple with the challenges of managing a multitude of nodes and algorithms.
This perspective challenges the conventional notion that emotions are exclusively tied to human consciousness and suggests that with the advancement of technology, machines might exhibit behaviors that closely resemble human emotions, albeit within the confines of programmed and learned parameters.
In the foreseeable future, it is conceivable that machines will surpass the human mind in terms of node count, compactness, and complexity, operating with heightened efficiency. As this technological advancement unfolds, it is plausible that profound questions may arise regarding whether the frequencies generated by the human brain are inferior to those generated by machines.
Dear Mathematicians, Control Engineers, and Optimization Enthusiasts, could you please compute the domain on which this inequality holds: 3/(x−2) < −1? If yes, please prove your answer.
(Note: the solution is not x > −1, and please don't use any symbolic math software.)
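For the record, the inequality can be solved by moving everything to one side: 3/(x − 2) < −1 ⟺ 3/(x − 2) + 1 < 0 ⟺ (x + 1)/(x − 2) < 0, and a quotient is negative exactly when numerator and denominator have opposite signs, which here means −1 < x < 2 (x = 2 is excluded from the domain anyway). A brute-force numeric check in Python:

```python
def satisfies(x):
    """True where 3/(x - 2) < -1 (x = 2 excluded from the domain)."""
    return x != 2 and 3 / (x - 2) < -1

# Sample finely over [-5, 5]; every satisfying point falls in (-1, 2)
xs = [i / 100 for i in range(-500, 501)]
inside = [x for x in xs if satisfies(x)]
```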
I have a large sparse matrix A which is column rank-deficient. The typical size of A is over 100000 x 100000. In my computation, I need the matrix W whose columns span the null space of A, but I do not know how to compute all the columns of W quickly.
If A were small, I know there are several numerical methods based on matrix factorization, such as LU, QR, and SVD, but for large-scale matrices I cannot find an efficient iterative method.
Could you please give me some help?
If you use a Likert scale where "never" is 0, 1 is "one to two times", 2 is "sometimes", 3 is "often", and 4 is "most of the time", how do you correctly compute the range?
Is the lowest value counted? Is it 4 − 1 for the lowest value, and then 3/4 = 0.75?
Will the ranges then be 0 - 0.75, 0.76 - 1.5, and so forth? Or does this need to be done differently?
Regards
Johan
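One common convention with five categories scored 0-4 is to divide the full score range by the number of categories: width = (4 − 0)/5 = 0.8, giving bands 0-0.8, 0.81-1.6, and so on up to 4.0. (The 0.75 in the question comes from dividing by 4, the number of gaps between categories, which is a different and also defensible convention; the key is to state which one you use.) A tiny sketch:

```python
def likert_bands(low, high, n_categories):
    """Equal-width interpretation bands for a Likert mean score:
    band width = (high - low) / n_categories."""
    width = (high - low) / n_categories
    return [(low + i * width, low + (i + 1) * width)
            for i in range(n_categories)]

# Scale scored 0..4 with five response categories
bands = likert_bands(0, 4, 5)   # five bands of width 0.8
```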
Our CD is broken and we need to install the program on a new computer. Thank you in advance.
Supplementary data resides only on my computer, which will stop working when I die. Why can't I store it at ResearchGate?
Enthalpy of mixing between two elements is a key factor in determining how easily an alloy can be formed: stronger (more negative) mixing enthalpies make alloy formation easier. Is there any way to compute it? Can anyone suggest a detailed procedure?
Time; lack of a computer, laptop, desktop, etc.
Dear all, I am trying to compute the Modified Weak Galerkin method for the Poisson Problem mentioned in the paper:
[1] A modified weak Galerkin finite element method. X. Wang, N.S. Malluwawadu, F. Gao, T.C. McMillan.
I am using FreeFEM++, but there are difficulties in applying algorithm (3) from the paper above, because the jump function, the average function, and the weak gradient are not predefined in this program.
My question is, which software program should I use to compute this problem?
Hi, I was going through this article by Tarsicio Beléndez, Cristian Neipp, and Augusto Beléndez.
I am wondering about the case where the load is at any intermediate point along the beam and not at the endpoint. That would be a universal case. How to proceed with that?
Expression (4) changes to:
M = P(a - ux - x), a is the point of vertical load.
But thereafter, it has no effect because a is a constant and gets sorted in the differentiation process, and all other equations remain as it is.
If we calculate the end-slope angle Phi0 for the load at 'a' instead of at L, can we use equation (9) to compute Phi0?
Of course, from small-angle theory, Phi0 should remain the same for a < x <= L (my assumption).
Reference:
- DOI: 10.1088/0143-0807/23/3/317
Additional reference: (Equation 11 instead of 9 in this article)
- DOI: 10.1007/s40030-018-0342-3
What are the three layers of embedded system architecture and what are the key differences between IoT devices and computers?
Greetings,
For one of my cognitive science lab sessions, I have to install JupyterLab on my computer (a MacBook Air). However, I am really lost as to how to do it, especially since I'm not tech-savvy and know next to nothing about Python. I tried checking the JupyterLab website, but I cannot figure out how to install it: https://jupyter.org.
Can someone help me with this? Thank you very much in advance!
I am trying to run a protein-ligand molecular dynamics simulation for 100 nanoseconds. I would like to know if it could be run on an 8 GB RAM laptop, or do we need more powerful computers?
Is there any list, work, or survey of current hard Computer Vision problems which are yet to be solved efficiently?
Dear researchers, I intend to generate a single composite index variable using the entropy weight method. How can the single index be computed in Stata from the different component variables?
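In case it helps to see the algorithm itself (which can then be translated into Stata `generate`/`egen` steps), the entropy weight method computes, for each indicator j, the entropy e_j = −(1/ln n) Σ_i p_ij ln p_ij of its share distribution, sets the weight w_j ∝ 1 − e_j, and forms the index as the weighted sum of normalised indicators. A hypothetical Python sketch (function names are mine; indicator values assumed positive):

```python
import math

def entropy_weights(data):
    """Entropy weights for the columns of `data` (rows = observations,
    columns = indicator variables; values assumed positive).
    e_j = -(1/ln n) * sum_i p_ij ln p_ij,  w_j = (1 - e_j) / sum(1 - e)."""
    n, m = len(data), len(data[0])
    raw = []
    for j in range(m):
        col = [row[j] for row in data]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(q * math.log(q) for q in p if q > 0) / math.log(n)
        raw.append(1 - e)                   # degree of divergence
    s = sum(raw)
    return [w / s for w in raw]

def composite_index(data, weights):
    """Weighted sum of min-max normalised indicators for each row."""
    m = len(weights)
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    norm = [[(row[j] - lo[j]) / ((hi[j] - lo[j]) or 1.0) for j in range(m)]
            for row in data]
    return [sum(w * v for w, v in zip(weights, row)) for row in norm]

# Toy data: the second indicator is constant, so it carries no information
w = entropy_weights([[1, 10], [2, 10], [3, 10]])
idx = composite_index([[1, 10], [2, 10], [3, 10]], w)
```

A constant indicator has maximum entropy (e_j = 1) and therefore receives zero weight, which is the method's defining property.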
I have tried to find the discriminant for complex-valued harmonic polynomial by using the following options:
1. Real part of the equation=Imaginary part of the equation=Jacobian with respect to equation=0
2. Using singular method.
The problem is these two methods are too slow to compute the discriminant.
My question is can I have another option to compute the discriminant?
Hi, I am doing a thesis on the above topic. Can you please recommend information and materials I can use to carry out the investigation? Thanks in advance.
I would like to acquire EMG, torque/angle (from isokinetic dynamometer), and stimulation data via Matlab. I have a chassis system that is connected to my computer with Matlab. When the stimulation is given, I would like it to trigger the system such that Matlab records and shows me the next 0.3s of data (i.e., to visualize M-wave).
Does anyone have a Matlab script they could share with me?
Hello everyone!
I want to draw a huge flowchart for my computer code. Inside the flowchart, I want to write equations and notes.
Is there any specific software for that?
Hi all,
I want to conduct a correlation analysis, to check whether changes in learning outcomes correlate with the number of lessons students attend. Learner outcomes are measured at baseline and endline, and the measurement is ordinal. To be specific, students get assessed and assigned to one of 5 learning categories (for example for literacy, this would be Beginner, Letter, Word, Sentence, or Paragraph). The number of lessons students attended is measured quantitatively.
Since we are interested in the changes in learning outcomes, I was initially planning to calculate a difference score between endline and baseline, and correlate that with the number of lessons attended. However, having discovered that learner outcomes are measured ordinally, this does not make any sense. What would be the best way to compute a correlation between changes in learner outcomes (between baseline and endline) and the number of lessons students attend?
Thank you in advance for your responses!
Best,
Sharon
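For illustration, the rank-based option often suggested for this situation looks like the sketch below (Spearman's rho only uses the ordering of values, never their spacing). All the data, the 0-4 coding of the five literacy levels, and the sample size are invented. One caveat: even Spearman on a gain score still treats a one-category move as comparable wherever on the scale it occurs, so an ordinal (proportional-odds) regression of the endline category on lessons attended, with baseline as a covariate, is the more defensible model.

```python
# Sketch with invented data: rank-based correlation between ordinal
# gain scores and lessons attended.
import numpy as np
from scipy.stats import spearmanr

# Literacy levels Beginner..Paragraph coded 0..4 (illustrative coding)
baseline = np.array([0, 1, 1, 2, 0, 3, 2, 1])   # category at baseline
endline  = np.array([1, 2, 3, 2, 2, 4, 3, 1])   # category at endline
lessons  = np.array([10, 25, 40, 5, 30, 35, 20, 8])

# Gain = change in coded category; Spearman uses only its rank order
gain = endline - baseline
rho, p = spearmanr(gain, lessons)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```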
Hi, I am trying to perform a radiometric calibration of an ASTER image in ENVI. I watched a YouTube tutorial where they use the Radiometric Calibration tool, but on my computer the Radiometric Calibration tool is not displayed. Does anyone have an idea why? I have already restarted both the program and the computer, but that did not help.
How do I transfer an ongoing discussion about an issue from my iPhone to my Windows desktop?
I am trying to calculate the spontaneous polarization of a semiconducting magnetic 2d material with VASP. The Material has a band gap of 0.28 eV (indirect).
VASP warns that
"The calculation of the macroscopic polarization by means of the Berry-phase expressions (LCALCPOL=.TRUE.) requires your system to be insulating. This does not seem to be the case."
This warning appears even though there is a finite band gap (with PBE). Moreover, since all the calculations are performed at 0 K, the semiconductor is effectively insulating for the purposes of this calculation. What is going wrong here?
I want to compute the critical load of an Euler-Bernoulli beam under axial load. I am using the finite element method for discretization and an eigenvalue analysis to compute the critical load. You can see more detail in the attachment. However, the value I obtain is not accurate compared to the analytical value. If anybody has an idea about this, please tell me. I will be very thankful.
Best,
Rauf.
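For comparison, here is a self-contained sketch (not your code; the values E = I = L = 1 are illustrative) of the standard FEM buckling formulation for a pinned-pinned beam: assemble the elastic stiffness K and the geometric stiffness Kg from cubic Hermite elements and solve the generalized eigenvalue problem K v = P Kg v; the smallest eigenvalue is the critical load and should approach Euler's P_cr = pi^2 EI/L^2. If yours does not, the usual suspects are the sign/scaling of Kg or the boundary conditions.

```python
# Buckling of a pinned-pinned Euler-Bernoulli beam via the generalized
# eigenvalue problem K v = P * Kg v (cubic Hermite elements).
import numpy as np
from scipy.linalg import eigh

E, I, L, n = 1.0, 1.0, 1.0, 20           # material, geometry, element count
h = L / n
ndof = 2 * (n + 1)                        # (deflection, rotation) per node

def elem_matrices(EI, h):
    # Standard elastic and consistent geometric stiffness matrices
    ke = EI / h**3 * np.array([
        [ 12,    6*h,  -12,    6*h ],
        [ 6*h, 4*h*h, -6*h, 2*h*h ],
        [-12,   -6*h,   12,   -6*h ],
        [ 6*h, 2*h*h, -6*h, 4*h*h ]])
    kg = 1.0 / (30*h) * np.array([
        [ 36,    3*h,  -36,    3*h ],
        [ 3*h, 4*h*h, -3*h,  -h*h ],
        [-36,   -3*h,   36,   -3*h ],
        [ 3*h,  -h*h, -3*h, 4*h*h ]])
    return ke, kg

K = np.zeros((ndof, ndof)); Kg = np.zeros((ndof, ndof))
ke, kg = elem_matrices(E * I, h)
for e in range(n):
    dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
    K[np.ix_(dofs, dofs)] += ke
    Kg[np.ix_(dofs, dofs)] += kg

# Pinned-pinned: fix the transverse deflection at both ends, keep rotations
keep = [i for i in range(ndof) if i not in (0, ndof - 2)]
vals = eigh(K[np.ix_(keep, keep)], Kg[np.ix_(keep, keep)], eigvals_only=True)
P_cr = vals[0]
print(P_cr, np.pi**2 * E * I / L**2)      # FEM value vs analytical value
```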
I have looked at database management and applications, data sets and their use in different contexts. Looking at digital computing in general, I have noticed that there seems to be a single split:
-binary computers, performing number crunching (basically), and behind this you find Machine Learning (ML), DL, RL, etc., at the root of the current AI
-quantum computing, still with numbers as key objects, with added probability distributions, randomisation, etc. This deviates from deterministic binary computing, but only to a certain extent.
Then, WHAT ABOUT computing "DIRECTLY ON SETS", instead of "speaking of sets" while actually only "extracting vectors of numbers from them"? We can program and operate with non-numerical objects; old languages like LISP and LELISP, where the basic objects are lists of characters of any length and shape, did just that decades ago.
So, to every desktop user of spreadsheets (the degree zero of data-set analytics) I am saying: you work with matrices, the mathematical name for tables of numbers, and you know about data sets and analytics. Why would YOU not put the two together? Sets are flexible. Sets are sometimes incorrectly called "bags" because it sounds fashionable (but bags have holes, may be plastic and non-reusable; sets are more sustainable, math is clean - joking). It's cool to speak of "bags of words"; I don't do that. Why sets? Sets handle heterogeneity, and they can be formed with anything you need them to contain, in the same way a vehicle can carry people, dogs, potatoes, water, diamonds, paper, sand, or computers. Why matrices? Matrices "vector-multiply" nicely and are efficient in any area of work, from engineering to accounting to any science or humanities domain. They can often be simplified: along some geometric directions (eigenvectors, eigenvalues) operations get simple, and sometimes a change of reference vectors yields a diagonal matrix, zeros everywhere except on the diagonal, by a simple change of coordinates (a geometric transformation).
HOW DO WE DO THAT IN PRACTICE? How do we compute on SETS, NOT ON NUMBERS? One can imagine the huge efficiencies potentially gained in some domains (new: yet to be explored, maybe BY YOU, IN YOUR AREA). The math is simple: it combines the knowledge of an 11-year-old (basic set theory) with that of a 15-year-old (basic matrix theory). SEE FOR YOURSELF, and please POST YOUR VIEW on where and how to apply it...
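As one concrete toy illustration of what "computing directly on sets" can mean, here is a matrix product taken over the semiring (sets, union, intersection) instead of (numbers, +, x); all the element names are invented:

```python
# A 2x2 matrix product where "multiply" is set intersection and
# "add" is set union, so matrix entries are heterogeneous sets.
A = [[{"dog", "car"}, {"water"}],
     [{"paper"},      {"dog", "sand"}]]
B = [[{"dog"},        {"car", "sand"}],
     [{"water", "x"}, {"paper"}]]

def set_matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    # C[i][j] = union over k of (A[i][k] intersected with B[k][j])
    return [[set().union(*(A[i][k] & B[k][j] for k in range(m)))
             for j in range(p)] for i in range(n)]

C = set_matmul(A, B)
print(C)   # e.g. C[0][0] collects "dog" and "water"
```

The same skeleton works with any pair of operations that behave like addition and multiplication, which is the algebraic point behind replacing numbers by sets.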
Dear Wojciech Salabun and coauthors,
I read your paper
How Do the Criteria Affect Sustainable Supplier Evaluation? - A Case Study Using Multi-Criteria Decision Analysis Methods in a Fuzzy Environment
And these are my comments:
1- Unfortunately, very scant information on the scenario is given. The paper only informs us that it is related to batteries, gives the number of suppliers, and says that there are 15 criteria, but does not say which they are; it only gives the labels of five sectors or clusters. It merely states that it refers to battery suppliers for swapping stations. And what are swapping stations? As you understand, the reader is not obliged to know about them.
2- On page 10 you say: "The resulting ranking was considered as a reference point in the study, with the ranking order of the options presented in Table 1". I understand that there is a mistake here, since it should be "Table 3".
3- In Section 4.1 you say: "A sensitivity analysis also provides more comprehensive knowledge and a greater view of the overall problem, showing the decision-makers what might change in the results under changing external conditions".
This is, in my opinion, an excellent contribution to the definition and properties of sensitivity analysis (SA). Its merit is that, in most cases, it is the reason why a DM can reverse the best alternative and select the second one. Unfortunately, this is very rarely mentioned in SA, which relies only on the existing criteria.
However, I disagree with your sentence "Therefore, the researchers adopted an approach in which the weight of each criterion was modified at +/- 20% from the initial value".
I disagree because you can't assume that all intervening criteria may change by +/- 20%. This is not realistic, since each intervening or binding criterion, that is, one that shapes a certain alternative, has its own allowed range of variation, and may even have none. In addition, it appears that all criteria are binding, which is very seldom the case; thus, it could be that only six or seven criteria, out of the 15, are relevant.
4- The paper considers excluding criteria to compute the ranking. I think that this procedure may be incorrect. One thing is to identify irrelevant criteria after a solution is reached, and another not to consider them from the beginning, solving a problem that ignores criteria which, although irrelevant according to entropy, are needed. Entropy, or better, its complement, the information derived from entropy, tells us which are the most important criteria for evaluating alternatives, but it does not indicate that the others can be eliminated because they have a high entropy. They contain information, perhaps very little, but information that it is not advisable to ignore.
Strangely enough, you determine that the most important criterion is C2, Transportation cost, and yet you decide to eliminate it?
5- I am curious about something. I guess that the batteries are for car factories or for retail. You consider transportation distances, ranging from 1055 km between Beijing and Wuhan, to 167 km between Beijing and Baoding, and 926 km between Wuhan and Baoding, but in my opinion this means nothing if you don't specify the destination of the batteries that can be manufactured in each of the three places. Obviously, if they are to be used in the same city where they are manufactured, the transportation cost is not very important, but it could be extremely important if the distance between origin and destination is large.
You say: “𝐶2 (0.750, 0.857, 1.000) (0.167, 0.200, 0.250) (0.500, 0.600, 0.750) (0.750, 0.857, 1.000)”
This illustrates that for transportation you have three values for each option. But what do they mean?
Of course, it is understood that they indicate, as a TFN, the smallest possible value, the largest possible value, and the most probable value. But of what? In the case of transportation cost, what do they refer to? Cost per unit/km? And in the case of distance? As far as I know, distances are fixed, unless there are several routes between two places. If there are, why do you need to use TFNs?
If there is a manufacturing site, it can be understood in the sense that there is a minimum cost of transportation for, say, train, an upper cost for, say, truck, and a middle one, say, considering another route. But 0.075 may be a percentage of what? The values across options are quite similar, suggesting that distance is not an issue; what is it then?
6- You assume that only 8 criteria are to be considered, out of 15, i.e., those are the criteria that participate in the selection of the ranking, which for me is very correct, and unfortunately very seldom done in published papers, where it is assumed that all criteria participate. Now, how did you select the 8 criteria? On what basis?
However, in my opinion, the set of relevant criteria, in your case 8, can't be assumed to apply to all alternatives. My research constantly shows that the number and type of criteria are particular to each alternative; hence the same set of criteria does not apply to them all, although, in general, some criteria repeat across alternatives within the same problem. That is, for A1 they may be C9, C2 and C7; for A2, C2, C5, C7, C10, C15 and C4; for A3, C7, C9, C1, etc. Therefore, I don't think that you can speak of the same set of criteria for all alternatives.
You say “The other criteria did not affect the ranking order of the options”
I agree, provided that in SA you refer to evaluating ONE alternative of the ranking, not all of them jointly.
7- In reality, you don't need to use fuzzy methods. If you use triangular numbers, which of course is correct, you can instead use, for each type of criterion, two criteria: one minimizing the lower value (no less than), and another maximizing the upper value (no more than). In this way, the final value will be computed by the software and, most importantly, will consider the interaction with all other criteria using the same resource, for instance money. As an example, if you have a minimum and a maximum value for storage, the software will find the INTERMEDIATE value that also satisfies another criterion, for instance, funds to fabricate a product according to demand. Naturally, I am referring to Mathematical Programming.
I hope that these comments are considered useful.
Best regards.
Nolberto Munier
Hi everyone,
Does anyone know how to compute the transition dipole moment (TDM) between vibronic states using Molpro, Molcas, or Gaussian?
An example is to compute the TDM between
v=1 in the ground state and
v=4 in the 1st excited state.
Thanks in advance for sharing your expertise
I ran some Gaussian calculations and transferred the .out file onto a second computer with GaussView installed (don't ask why it is not installed on the first PC).
This was perfectly fine until a week ago, when I started always getting the following error message:
"Reading file D://path/XXX.out
SCUtil_ConnectionGauss:.preprocessfile_ScanGeom()
Bad atomic symbol/atomic number:t
Line number: 17"
This never happened before with any calculation, but now I cannot open any .out file, not even those that previously opened without a problem.
Although I have not performed any update of Gaussian or GaussView, I already tried reinstalling both. Ultimately, the first PC now also has GaussView, but opening .out files on the second one is still not working.
Any ideas?
Hi everyone,
We are planning to apply for a grant to develop and implement an AI curriculum for upper elementary students. As part of our proposal, we need to identify a validated research instrument that we can use to assess elementary students’ AI content knowledge.
We are looking for a test that is aligned with the 5 AI4K12 Big Ideas and is appropriate for elementary students. The 5 AI4K12 Big Ideas are:
Perception: Computers perceive the world using sensors.
Representation & Reasoning: Computers represent knowledge and use it to reason about the world.
Learning: Computers can learn from data.
Natural Interaction: Computers can interact with humans in natural ways.
Societal Impact: AI has the potential to have a significant impact on society.
If you know of any tests that meet these criteria or any other suggestions, please let me know.
Thank you all for your help!
Best,
Kaya
Experiment No. 7: CONFIGURING VLANS ON MULTIPLE SWITCHES
Experiment No. 8: CONNECTING A HUB WITH A SWITCH
Experiment No. 9: CONFIGURING STATIC ROUTING
Experiment No. 10: CONFIGURING DYNAMIC ROUTING USING RIP
Experiment No. 11: CONFIGURING DYNAMIC ROUTING USING OSPF
Experiment Findings Computer Communication and Networks (Lab Reports)
Hello everyone, I have data that are not normally distributed, so I need to run the Mann-Whitney U test instead of the independent-samples t-test, and the Kruskal-Wallis test instead of ANOVA. The problem is that my data consist of five-point Likert scales (I have several items that test a particular aspect of the study; they are organized into scales, every scale consists of a number of items, all five-point Likert scales, and the Cronbach's alpha is fine). My question is: do I compute these items based on the mean to create one variable? Or do I need to compute them based on something else (sum, median...), because the non-parametric tests use rankings? I do hope you would be so kind as to help me. Thank you.
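For what it's worth, the common practice is the per-respondent item mean; the sum gives an identical U statistic when every respondent answers the same number of items, because the rank order is unchanged. A minimal sketch with invented data:

```python
# Sketch with invented data: average the Likert items of one scale into
# a composite score per respondent, then compare two groups with the
# Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
items_a = rng.integers(1, 6, size=(30, 4))   # group A: 30 people x 4 items
items_b = rng.integers(2, 6, size=(25, 4))   # group B: 25 people x 4 items

# Composite = item mean per respondent; the sum would rank identically
score_a = items_a.mean(axis=1)
score_b = items_b.mean(axis=1)

u, p = mannwhitneyu(score_a, score_b, alternative="two-sided")
print(u, p)
```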
What are the factors affecting the measurement of electrolytic conductivity using DC and AC techniques? What are the challenges and precautions related to electrolytic conductivity measurement? And how is the cell constant computed?
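On the last point, the cell constant is usually obtained by calibration: measure the resistance of a standard solution of known conductivity, then K_cell = kappa_std x R_std. A small worked example (the resistance readings below are invented; the 0.1 M KCl conductivity is the commonly tabulated 25 °C value):

```python
# Calibrate with a standard KCl solution of known conductivity, then
# use the cell constant to convert a measured resistance of an unknown
# sample into conductivity.
kappa_std = 0.012856     # S/cm, 0.1 M KCl at 25 C (tabulated)
R_std = 77.8             # ohm, measured resistance of the standard (invented)

K_cell = kappa_std * R_std          # cell constant = kappa * R, in 1/cm
print(f"cell constant = {K_cell:.3f} cm^-1")

# Unknown sample: conductivity = cell constant / measured resistance
R_sample = 250.0                    # ohm (invented)
kappa_sample = K_cell / R_sample    # S/cm
print(f"sample conductivity = {kappa_sample*1000:.3f} mS/cm")
```

Geometrically, K_cell equals the electrode spacing divided by the electrode area (L/A), but the calibration route above absorbs fringing-field effects that the geometric formula misses.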
for example, if i communicate using works having word-number code in order to manipulate computers in the future using known AI techniques, should this be considered "illegal" when intent has also been described transparently in advance of any proposed future change? #616 was here @DNA-Modifications (It's in the AI.RE)
Hey guys,
I have found two multi-item scales in my previous research for my master's thesis. I want to know whether I can compute an EFA for both the dependent and the independent variable.
I need a step-by-step procedure for performing a single-parameter sensitivity analysis to evaluate the impact of parameters on a vulnerability index. I am particularly confused about how to create the sub-areas in GIS and compute the parameter rates and weights.
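For orientation, the usual "effective weight" form of single-parameter sensitivity analysis (as applied to DRASTIC-type indices) computes, for every sub-area (each GIS polygon or grid cell), W = rating x weight / index x 100, and compares it with the theoretical weight. A sketch with invented ratings for three sub-areas and five parameters:

```python
# Single-parameter sensitivity analysis via effective weights.
# The weights and the per-sub-area ratings below are invented.
import numpy as np

weights = np.array([5, 4, 3, 2, 1])           # theoretical parameter weights
ratings = np.array([[9, 7, 5, 3, 8],          # one row of ratings per sub-area
                    [6, 8, 4, 5, 2],
                    [7, 5, 9, 2, 4]])

V = ratings @ weights                          # vulnerability index per sub-area
eff = 100 * ratings * weights / V[:, None]     # effective weight (%) per cell

theo = 100 * weights / weights.sum()           # theoretical weights (%)
print("theoretical %:", theo)
print("mean effective %:", eff.mean(axis=0))
```

In GIS terms, each row of `ratings` comes from overlaying the parameter raster/polygon layers and reading one rating per parameter per sub-area; the comparison of mean effective versus theoretical weights shows which parameters actually drive the index.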
Hello,
Can anyone advise me on storing state variable values in an array at different time steps? I'm trying to make computations at TIME(1) = 0.1, 0.6 and 0.9. I am able to compute the state variables, but I need to make calculations at the end of TIME(1) = 0.9. When I want to call the values at T(0.1), T(0.6) and T(0.9), I'm unable to do it. These values are stored in the ODB, but I would like to automate the process so the parameters are updated for the next steps.
Thanks in advance
I am trying to compute polychoric correlation coefficients for 10-12 pairs of ordinal variables, and am looking for suggestions on open-source software to do the computation.
Thanks, Sarosh