Science topic

Computer - Science topic

Explore the latest questions and answers in Computer, and find Computer experts.
Questions related to Computer
  • asked a question related to Computer
Question
2 answers
I have fit a Multinomial Logistic Regression (NOMREG) model in SPSS, and have a table of parameter estimates. How are these parameter estimates used to compute the predicted probabilities for each category (including the reference value) of the dependent variable?
Relevant answer
Answer
Thank you for your quick response and useful information. Can you share some resources about the interpretation and presentation of the results using predicted probabilities?
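As a general illustration (not tied to any particular dataset), the mapping from parameter estimates to predicted probabilities works like this: each non-reference category has a linear predictor built from its coefficients, the reference category's predictor is fixed at 0, and the probabilities come from a softmax over those predictors. A minimal Python sketch with invented coefficient values:
```python
import numpy as np

# Hypothetical parameter estimates for a 3-category outcome (category 3 = reference).
# Columns: intercept, x1, x2 -- replace with the values from the SPSS output.
beta = {
    "cat1": np.array([0.50, 1.20, -0.30]),
    "cat2": np.array([-0.25, 0.40, 0.80]),
}

x = np.array([1.0, 2.0, 0.5])  # design vector: 1 (intercept), x1, x2

# Linear predictors; the reference category's predictor is 0 by convention
eta = {k: float(b @ x) for k, b in beta.items()}
eta["reference"] = 0.0

# Softmax: P(category k) = exp(eta_k) / sum_j exp(eta_j)
denom = sum(np.exp(v) for v in eta.values())
probs = {k: np.exp(v) / denom for k, v in eta.items()}
print(probs)  # probabilities for cat1, cat2 and the reference category sum to 1
```
The reference category's probability falls out automatically because its linear predictor is 0, which is why no separate coefficients are printed for it in the NOMREG output.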
  • asked a question related to Computer
Question
1 answer
Hello to all,
in my lab we recently had an electrical fault and we lost the computer with the CXP software that was connected to our FC500 MPL flow cytometer. Since we are not able to find the CD with the software anymore, I asked Beckman, but they weren't able to give us a copy of the software because it is very old. The cytometer model is out of production, and so is the software, apparently.
Does someone have a copy of the CD to send to me? That would be really appreciated!
Relevant answer
Answer
The cytometer includes:
  • Analyzer that counts, measures, and computes parameters.
  • Power supply that provides voltages, vacuum, and pressure.
  • MCL that positions the sample tubes in the carousel for sampling.
  • Laser(s) for fluorescence excitation.
Regards,
Shafagat
  • asked a question related to Computer
Question
3 answers
Tensor computing and quantum computing are two distinct fields with different applications. Tensor networks, such as MPS, PEPS, TTNs, and MERA, have been successfully used in classical machine learning and quantum machine learning, where they can be mapped to quantum computers for improved performance. These tensor networks are efficient for preparing ground states on classical computers and can be combined with quantum processors for tasks like time evolution, which can be intractable on classical computers. On the other hand, quantum computers aim to outperform classical computers in various computational tasks by utilizing the principles of quantum mechanics. Here is a quick comparison between quantum computing and tensor computing:
Quantum Computing:
1- Based on principles of quantum mechanics - uses quantum bits (qubits) that can exist in a superposition of 0 and 1
2- Leverages quantum phenomena like entanglement and interference
3- Can solve certain problems dramatically faster than classical computers (exponential speedup for factoring with Shor's algorithm, quadratic speedup for unstructured search with Grover's algorithm)
4- Still in the early stages of development with small-scale quantum computers built
5- Potential applications in cryptography, machine learning, molecular modeling, etc.
Tensor Computing:
1- Based on multidimensional array data structures called tensors
2- Used extensively in deep learning and AI for parameters and dataset representations
3- Leverages tensors for efficient parallel data processing and manipulation
4- Scales well on classical hardware like GPUs through frameworks like TensorFlow
5- Already in use in many machine learning applications like computer vision, NLP, etc.
For more information and details, please see:
Relevant answer
Answer
Quantum computing by mathematics: achieving tomorrow's technology with today's tools. In continuation of the previous posts: despite advances toward achieving the power of quantum computing at temperatures around 1 Kelvin, and efforts to use photonic and laser approaches to reach this power at room temperature, we are still far from quantum computers being usable in consumers' homes at reasonable cost. As mentioned, computing styles based on artificial intelligence and on neural, cognitive, and neuromorphic computing are also being developed. These methods seek to imitate the behavior of human brain computation in machines as fully as possible, so that they can reproduce most of its abilities and states. Meanwhile, using computational tools from various branches of mathematics such as algebra, analysis, and geometry is a completely different approach. Changing the basis of computation from binary scalar states to vectors, matrices, and other high-dimensional objects such as tensors and manifolds is a style that is very close to quantum states and gates. The origin of this style was the attempt to simulate quantum states and quantum computing with mathematical algorithms on classical computers, as well as to manage big, high-dimensional, multidimensional data.
For more information, please see:
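As a toy illustration of the "tensor computing" side of the comparison above (not tied to any particular framework mentioned there), here is a minimal NumPy sketch contracting two tensors, the elementary operation underlying tensor-network methods such as MPS:
```python
import numpy as np

# Two small tensors: A has shape (2, 3, 4), B has shape (4, 5)
A = np.random.rand(2, 3, 4)
B = np.random.rand(4, 5)

# Contract the shared index of size 4 (analogous to a bond index in an MPS)
C = np.einsum('ijk,kl->ijl', A, B)
print(C.shape)  # (2, 3, 5)
```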
  • asked a question related to Computer
Question
6 answers
I am quite confused about what formula to use to compute my sample size. I will be conducting a Sequential Explanatory design wherein my QUANT phase will make use of mediation analysis and my qual phase will be interpretative phenomenology. How can I determine the sample size? What is the best formula to use?
Relevant answer
Answer
Bruce Weaver A useful app but it appears that its focus (currently) is exclusively on power. I believe that a simulation that is run "from scratch" provides greater flexibility and yields a lot more information than just on power. For example, you can simulate the effects of non-normal and missing data and evaluate the performance of fit statistics, as well as parameter and standard error bias in addition to estimating power. Power estimations can be misleading when your SEM parameters or standard errors are biased due to, for example, an insufficient sample size.
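To make the "simulation from scratch" idea concrete for the QUANT (mediation) phase, here is a minimal Monte Carlo power sketch in Python. The effect sizes (a = b = 0.3), unit error variances, and the Sobel test are illustrative assumptions only, not recommendations for any particular study:
```python
import numpy as np

rng = np.random.default_rng(1)

def slope_se(x, y):
    """Simple-regression slope and its standard error (both variables centered)."""
    x = x - x.mean()
    y = y - y.mean()
    slope = (x @ y) / (x @ x)
    resid = y - slope * x
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (x @ x))
    return slope, se

def sim_power(n, a=0.3, b=0.3, n_sims=2000):
    """Estimated power to detect the indirect effect a*b in a simple X -> M -> Y mediation."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        a_hat, sa = slope_se(x, m)
        b_hat, sb = slope_se(m, y)
        sobel_z = (a_hat * b_hat) / np.sqrt(b_hat**2 * sa**2 + a_hat**2 * sb**2)
        hits += abs(sobel_z) > 1.96   # two-tailed test at alpha = .05
    return hits / n_sims

for n in (50, 100, 200):
    print(n, sim_power(n))
```
The same loop can be extended to inject missingness or non-normal errors, which is exactly the extra flexibility mentioned above.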
  • asked a question related to Computer
Question
4 answers
For the investigation of the start-up of an alternating anaerobic-anoxic reactor, I would like to compute the potential biomass decay of aerobic organisms (specifically autotrophic ammonia oxidizers, AOB and AOA) in my inoculum. For this purpose, I am going to apply specific decay rates for AOB and AOA, but I lack data on the autotrophic cell mass in my inoculum.
Can anybody provide average values or ranges for the share of autotrophs in the MLVSS of typical conventional activated sludge systems?
Thank you in advance for any advice!
Relevant answer
Answer
Since the residence time in the aeration tank is about one day, while the doubling time of autotroph biomass (for example, chlorella) is about four days, there are few autotrophs.
  • asked a question related to Computer
Question
1 answer
Can't create job for the server: CASTEP.
This computer was unable to communicate with the computer providing the server.
Relevant answer
Answer
This error is due to a connection issue between your PC and the server, so your PC is not able to communicate with the server. Please verify the following:
1) Restart your PC and try to run the simulation again.
2) Check the "server status" in the License Administrator (connected or not).
3) Check the "server gateways" in the server console (whether the server IP exists or not).
  • asked a question related to Computer
Question
5 answers
Hey! I want to know the hazard ratio for male vs female using the Kaplan-Meier survival command. However, so far SPSS only gives me the survival plot and statistical significance without the hazard ratio. Is there any way to compute it? (Using the log-rank test, it only provides significance.)
Relevant answer
Answer
Hello Fadhaa Aditya Kautsar Murti. I don't know where you got "Add more covariates/predictors" in Babyak's article. The second suggestion in that section was to combine predictors if there is a sensible way to do so. And doing that would reduce the number of degrees of freedom eaten up by predictors. Indeed, notice what the first sentence of that section says:
"One obvious approach to preserve degrees of freedom is to
reduce the number of predictors in the model."
You asked about hazard rates, which implies that you have a survival model. For survival models, the limiting sample size is the number of events. Babyak's recommendation is that you have at least 10-15 events per explanatory variable degree of freedom. Nowadays, Frank Harrell recommends at least 20 EPV.
I hope this helps.
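If stepping outside the Kaplan-Meier procedure is an option, the hazard ratio comes directly from a Cox proportional-hazards model (in SPSS itself, the Cox Regression procedure reports it as Exp(B)). A hedged sketch in Python with the lifelines package, where the column names and toy data are assumptions about your dataset:
```python
import pandas as pd
from lifelines import CoxPHFitter

# Assumed columns: 'time' (follow-up), 'event' (1 = event, 0 = censored), 'male' (1/0)
df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 15, 7, 11, 6, 14, 4, 10],
    "event": [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "male":  [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                 # coef, exp(coef) = hazard ratio, CI, p-value
print(cph.hazard_ratios_["male"])   # hazard ratio for male vs female
```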
  • asked a question related to Computer
Question
1 answer
Is it possible to compute the peak-to-background ratio in the case of a double peak, that is, when two peaks are not completely resolved?
So far we have (figure attached), trying to get the P/B ratio for the second peak (peak 1):
1.) Counts under peak 0
2.) Counts under peak 1
3.) Total counts under the combined region
As I understand it, merely subtracting 1.) and 2.) from 3.) will give the background counts for both peaks; however, only the background under peak 1 is required.
Thank You
Relevant answer
Answer
Use the background area under peak 1 as shown in the figure. It is possible to perform iterative fitting to refine the background, but unlikely you will reduce the uncertainty in the estimate to an important degree.
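If you do attempt the iterative fitting mentioned above, here is a minimal sketch (Python, with invented peak positions, widths, and a linear background) of decomposing two overlapping Gaussians so that the background under peak 1 alone can be integrated:
```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a0, mu0, s0, a1, mu1, s1, b0, b1):
    """Two Gaussian peaks plus a linear background."""
    g0 = a0 * np.exp(-0.5 * ((x - mu0) / s0) ** 2)
    g1 = a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
    return g0 + g1 + b0 + b1 * x

# x and y stand in for the channel axis and the measured counts of the combined region
x = np.linspace(0, 100, 200)
y = np.random.default_rng(0).poisson(model(x, 500, 35, 5, 300, 55, 6, 20, 0.1)).astype(float)

p0 = [400, 30, 5, 250, 60, 5, 10, 0]               # rough initial guesses
popt, pcov = curve_fit(model, x, y, p0=p0)
a1, mu1, s1, b0, b1 = popt[3], popt[4], popt[5], popt[6], popt[7]

mask = (x >= mu1 - 3 * s1) & (x <= mu1 + 3 * s1)   # integration window for peak 1
peak1 = np.trapz(a1 * np.exp(-0.5 * ((x[mask] - mu1) / s1) ** 2), x[mask])
bkg1 = np.trapz(b0 + b1 * x[mask], x[mask])
print("P/B for peak 1:", peak1 / bkg1)
```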
  • asked a question related to Computer
Question
4 answers
Hello, I am trying to submit my paper to the Computers and Electronics journal, but I can't seem to find their template anywhere in their system, and they mention "Your Paper Your Way". If anyone has the journal template, please share it here.
Thank You
Relevant answer
Answer
One solution is to use: https://converter.app/docx-to-latex/result.php?lang=en to convert docx file to latex.
  • asked a question related to Computer
Question
2 answers
Kindly assist with my challenge. I used the stress/atom command in LAMMPS to calculate the von Mises stress and hydrostatic stress in a silicon nanometric cutting simulation. Unfortunately (or fortunately) I got a hydrostatic stress of at most -3 GPa to 1 GPa, which is low compared to the 11-14 GPa reported for Si and a diamond tool with 204,241 atoms. Note that the hydrostatic stress is for the tool atom group. I attached the stress part of the script; if the entire script is needed, I will attach it.
Relevant answer
Answer
Daniel Jacob Bilar, thanks for your insights. I will try your suggestions.
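For anyone checking their numbers, here is a minimal post-processing sketch (Python) of how von Mises and hydrostatic stress are typically derived from the six per-atom components written by LAMMPS compute stress/atom. Two assumptions are made: the components are taken as already divided by a per-atom volume (stress/atom itself reports stress × volume), and the usual sign convention is used, so pressure is the negative of the mean normal stress:
```python
import numpy as np

# sxx .. syz: per-atom stress components, shape (n_atoms,),
# assumed already divided by a per-atom (e.g. Voronoi) volume
def von_mises(sxx, syy, szz, sxy, sxz, syz):
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (sxy ** 2 + sxz ** 2 + syz ** 2))

def hydrostatic(sxx, syy, szz):
    # mean normal stress; flip the sign if you want the pressure instead
    return (sxx + syy + szz) / 3.0
```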
  • asked a question related to Computer
Question
1 answer
Hello everyone, I am simulating DEP on a particle and I have some problems:
  1. How do I calculate the dielectrophoretic force on the particle, so that its value is shown in the results?
  2. Should I compute all the studies at once, or just the one related to the result that I want?
Please help me solve the problems, thank you.
Relevant answer
Answer
Quantifying the dielectrophoretic force in COMSOL involves setting up a simulation that includes the relevant physics and boundary conditions. Dielectrophoresis (DEP) is the phenomenon where a non-uniform electric field exerts a force on a dielectric particle. Here's a general guide on how to quantify the dielectrophoretic force using COMSOL:
  1. Geometry Setup:Define the geometry of your system. This could include electrodes, microfluidic channels, and dielectric particles. Create the geometry using COMSOL's geometry tools or import it from CAD software.
  2. Physics Settings:Add the Electric Currents physics interface to your model. Define the properties of the materials involved, including conductivity, permittivity, and relative permittivity. Enable the Dielectrophoresis interface from the AC/DC module. This interface allows you to simulate the dielectrophoretic force acting on the particles.
  3. Boundary Conditions:Define the boundary conditions for your simulation. This includes setting up the electrodes' potentials or applying an external electric field. Ensure that the boundary conditions correspond to the experimental setup you are trying to model.
  4. Meshing:Generate a mesh for your geometry. The mesh should be fine enough to capture the details of the electric field distribution accurately, especially near the electrodes and the particles.
  5. Solver Settings:Choose an appropriate solver and set the solver settings according to your simulation requirements. For dielectrophoresis simulations, a stationary solver coupled with an AC frequency solver is often used.
  6. Particle Properties:Specify the properties of the dielectric particles, including their size, shape, and dielectric properties.
  7. Post-Processing:After running the simulation, use COMSOL's post-processing tools to visualize the results. You can visualize the electric field distribution, particle trajectories, and calculate the dielectrophoretic force acting on the particles.
  8. Quantifying Dielectrophoretic Force:Once you have the simulation results, you can quantify the dielectrophoretic force acting on the particles. This can be done by analyzing the particle trajectories and calculating the force exerted on the particles by the non-uniform electric field. COMSOL provides tools for post-processing, including particle tracing and force calculation, which can help you quantify the dielectrophoretic force accurately.
  9. Validation:Validate your simulation results by comparing them with experimental data if available. Adjust parameters and settings as necessary to improve the accuracy of your simulations.
By following these steps and utilizing COMSOL's capabilities for simulating dielectrophoresis, you can effectively quantify the dielectrophoretic force in your system.
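Outside COMSOL, the point-dipole approximation gives a quick order-of-magnitude check on the DEP force: F_DEP = 2π ε_m r³ Re[K(ω)] ∇|E_rms|², with the Clausius-Mossotti factor K(ω) = (ε_p* − ε_m*)/(ε_p* + 2ε_m*) and ε* = ε − jσ/ω. A minimal Python sketch in which all particle and medium values are placeholders, not your actual setup:
```python
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity, F/m

def complex_perm(eps_r, sigma, omega):
    """Complex permittivity eps* = eps - j*sigma/omega."""
    return eps_r * eps0 - 1j * sigma / omega

def dep_force(r, eps_p, sig_p, eps_m, sig_m, freq, grad_E2):
    """Point-dipole DEP force (N) on a sphere of radius r (m)."""
    omega = 2 * np.pi * freq
    ep = complex_perm(eps_p, sig_p, omega)
    em = complex_perm(eps_m, sig_m, omega)
    K = (ep - em) / (ep + 2 * em)              # Clausius-Mossotti factor
    return 2 * np.pi * eps_m * eps0 * r**3 * K.real * grad_E2

# Placeholder numbers: 5 um polystyrene-like bead in an aqueous medium, 1 MHz field
print(dep_force(r=5e-6, eps_p=2.5, sig_p=1e-3, eps_m=78, sig_m=1e-2,
                freq=1e6, grad_E2=1e13))
```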
  • asked a question related to Computer
Question
1 answer
Hi everyone, my computer doesn't recognize the NanoDrop and shows an error Code 10. Could it be a driver error or hardware damage? Thanks for your help! :)
Relevant answer
Answer
To accurately determine the reason for the yellow light and ensure the proper functioning of your NanoDrop 8000, I recommend consulting the instrument's user manual or contacting the manufacturer's technical support for assistance. They can provide specific guidance based on the instrument's design and any error codes or messages displayed.
  • asked a question related to Computer
Question
4 answers
Hello,
I am quantifying bacteria exposed or not to a certain chemical at specific time points.
How can I compute the results in a relative manner? That is: what is the formula for a ∆∆Ct analysis of bacteria?
Relative quantification is normally given for a gene of interest over one or more reference genes by calculating the cycle difference between the exposed and not-exposed genes and then between GOI and REF.
For bacteria, would these be acceptable:
∆Ct = Ct_t – Ct_0
∆∆Ct = ∆Ct (exposed) – ∆Ct (control)
where t is a specific time (0, 1, 2, 4, 24 h post-exposure) and 0 is t = 0. The fold increase would then be 2^(–∆∆Ct).
Is this correct?
Thank you
Relevant answer
Answer
The quantitative cycle is proportional to the bacterial density...
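For completeness, here is a minimal worked example of the calculation laid out in the question, with invented Ct values:
```python
# Hypothetical Ct values for one target in bacteria at t = 0 and t = 4 h
ct_exposed_t0, ct_exposed_t4 = 22.0, 19.5
ct_control_t0, ct_control_t4 = 22.1, 21.8

delta_ct_exposed = ct_exposed_t4 - ct_exposed_t0      # ∆Ct (exposed) = Ct_t − Ct_0
delta_ct_control = ct_control_t4 - ct_control_t0      # ∆Ct (control)
delta_delta_ct = delta_ct_exposed - delta_ct_control  # ∆∆Ct

fold_change = 2 ** (-delta_delta_ct)                  # fold increase = 2^(−∆∆Ct)
print(fold_change)                                    # ≈ 4.6-fold in this made-up example
```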
  • asked a question related to Computer
Question
2 answers
I've been trying to log in multiple times, but I keep failing.
Relevant answer
Answer
1. Technical problems: The platform might be going through maintenance or having technical problems, which could interfere with the login procedure. It is best to ask for help from the platform's support staff in such circumstances.
2. Incorrect login information: Verify that the username and password you are entering are correct. In the event that you lose your login information, you can typically utilise the platform's password recovery feature to get back in.
3. Account activation: To use certain platforms, users must go through a verification process, like clicking on an email link, to activate their accounts. Your ability to log in may be blocked if the activation process is not finished.
4. Browser or device compatibility: Some platforms might need particular web browsers or devices to work properly, or they might have compatibility problems. To see if that fixes the login problem, try utilising a different browser or device.
5. Security measures: A platform may occasionally make the login process more difficult if it has added extra security measures like two-factor authentication (2FA). In order to successfully finish the authentication process, make sure you adhere to the given instructions.
For help, I suggest contacting Qeios customer support if you are having trouble logging in consistently. They will be able to answer any questions you might have and offer you specific advice.
@salomebukachi
  • asked a question related to Computer
Question
3 answers
Hi everyone. I am very new to computational protein work, so I have no idea how to install AlphaFold 2 to run on my computer.
I heard that if you install AlphaFold and all required dependencies from source code, it runs significantly faster than when using a (pre-built) container.
I don't really have any idea of how those two options work. If anyone could help me with this, I would be really grateful.
Does any of you know how to do this?
Relevant answer
Answer
You don't need to install on your computer. The easiest option would be ColabFold.
  • asked a question related to Computer
Question
3 answers
details:
MIPS USB Cameras provide a quick and easy means of displaying and capturing high-quality video and images on any USB 2.0-equipped desktop or laptop computer running a supported Microsoft® OS.
Please send it to me.
thanks
Karthick
Relevant answer
Answer
Please download and use it.
  • asked a question related to Computer
Question
3 answers
I want to detect multivariate outliers in my dataset, which contains participant responses to various questionnaires such as DASS-21, PSWQ etc. Should I compute Mahalanobis distance using total scores for constructs such as depression, anxiety and worry or should I use item-level data from the questionnaires before aggregating them into their total scores? When using item-level data, 25 participants are detected as outliers among 370 participants, but when using the total scores, only 1 participant is detected as multivariate outlier.
Relevant answer
Answer
@Jennifer Santos, as I did not use ChatGPT, I do not have access to that platform at all, because it is banned and sanctioned in our country. But I would like to know: what is your problem?
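On the mechanics of the check itself, here is a minimal sketch (Python) of flagging multivariate outliers from the construct total scores: squared Mahalanobis distances are compared against a chi-square cutoff with degrees of freedom equal to the number of variables (here assumed to be three totals, e.g. depression, anxiety, and worry); the data below are placeholders:
```python
import numpy as np
from scipy.stats import chi2

# X: rows = participants, columns = total scores (e.g. depression, anxiety, worry)
rng = np.random.default_rng(0)
X = rng.normal(size=(370, 3))          # placeholder data

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mean
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis distances

cutoff = chi2.ppf(0.999, df=X.shape[1])              # common p < .001 criterion
outliers = np.where(d2 > cutoff)[0]
print(len(outliers), "flagged participants")
```
Running the same check on item-level data simply raises the degrees of freedom (one per item), which is one reason the two approaches can flag very different numbers of cases.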
  • asked a question related to Computer
Question
1 answer
Relevant answer
Answer
The question is a bit older, but nonetheless :)
A few comments: Products have acquisition and disposal costs, but also costs for maintenance, support, updates/modifications, and obsolescence. This is also part of the function. Products consist of parts (HW and SW) that can fail in several ways; fixing them causes costs. Costs also come from revenue lost because of non-availability. So I suggest reformulating this as an asset management question. Costs can be calculated via the tasks/services required to restore a function. The functions of systems/parts drive the costs. A part failure may not result in a function failure, in which case it is irrelevant.
For methodology, have a look at www.s3000L.org, www.s4000P.org and the downloads at www.sx000i.org. The specs are free; updates are scheduled for the middle of this year.
Tools are available from TFD in the UK (Vmetric) or the Systecon Toolsuite from Sweden (OPUS10, etc.). The math is not very complex if you know math; focus on costs vs. availability. You may find some tools in the US as well.
ISO15288 on Systems Engineering provides the engineering perspective.
  • asked a question related to Computer
Question
1 answer
For a study on anthropomorphic design of AI and trust, we are currently looking for established questionnaires regarding the perception of the role of computers in human-machine teams.
A primary example would be that the machine is perceived as "a teammate vs a tool".
Feel free to also answer with questionnaires on related constructs.
Thanks in advance and kind regards,
Martin
Relevant answer
Answer
Hi Martin, funnily enough, someone just pointed me to this paper:
The godspeed questionnaire or some of its scales might be useful for what you plan.
Best regards from Bonn :-)
  • asked a question related to Computer
Question
1 answer
I would like to create visually appealing diagrams for my publications. For instance, to create graphical abstracts for articles, I want to know which computer tool can assist me in this regard.
Thank you in advance for your responses.
Relevant answer
Answer
Personally, I have used BioRender recently. It is a rather user-friendly tool (with limitations), and you can try it for free to see if it suits you. The only real drawback is that it is not free if you need publication-quality pictures.
Best regards.
  • asked a question related to Computer
Question
4 answers
I think SPSS made my computer slow. Be careful when running it on your computer.
Relevant answer
Answer
R + emacs
  • asked a question related to Computer
Question
1 answer
Hello, dear community! ,
I am working on gamma-ray spectrometry data to delineate K-enrichment areas. I have created the maps using Geosoft, but I get errors when I use the K/eTh ratio (a map with few contours). I don't know what I am doing wrong.
Can anyone please help me?
Thank you
Relevant answer
Answer
There could be several reasons why you're encountering difficulties calculating the K/eTh ratio or F-parameter in your gamma-ray spectrometry data for K-enrichment delineation. Here are some possibilities to explore:
Data Issues:
  • Missing or incorrect channels: Ensure your data includes the necessary channels for potassium (K-40), thorium (eTh), and possibly other relevant elements like uranium (U-238).
  • Calibration errors: Verify that your data is properly calibrated for energy and intensity. Inaccurate calibration can lead to distorted peaks and incorrect elemental concentrations.
  • Background subtraction: Check if the background spectrum has been adequately subtracted. Residual background can interfere with peak analysis and affect ratio calculations.
Processing Errors:
  • Formula mistake: Double-check the formula you're using for calculating the K/eTh ratio or F-parameter. Ensure you're using the correct channel numbers and units.
  • Data manipulation errors: Verify that you haven't inadvertently applied any filters or transformations that might affect peak areas or ratios.
  • Software limitations: Check if Geosoft has any specific limitations or requirements for calculating these parameters. Refer to the software's documentation or consult their support team.
Interpretation Issues:
  • Low K concentrations: If the K concentration in your data is low compared to eTh, the K/eTh ratio might have few contours due to insufficient contrast. Consider using alternative parameters like K/U or Th/U ratios that might be more sensitive to K variations.
  • Geological factors: The distribution of K and eTh in your study area might not be spatially correlated, making the K/eTh ratio less informative for delineating K-enrichment zones. Consider incorporating other geological and geochemical information into your analysis.
Additional Tips:
  • Consult the literature: Refer to relevant publications on gamma-ray spectrometry data processing and K-enrichment delineation for specific techniques and best practices.
  • Seek expert advice: Consider consulting with a geophysicist or geologist experienced in gamma-ray spectrometry data analysis for troubleshooting and interpretation.
  • Share more details: If possible, provide more information about your data (e.g., data format, acquisition method, study area), processing steps, and specific error messages you encounter. This will help in providing more targeted guidance.
By systematically investigating these potential causes and seeking additional resources, you should be able to identify the source of your problem and successfully calculate the K/eTh ratio or F-parameter for your K-enrichment delineation project.
  • asked a question related to Computer
Question
1 answer
A human can only aspire to fluency in so many different languages before mixing up words due to code switching. Thus, MAYBE those who cannot learn so many languages turn to linguistics and coding to earn money.
Relevant answer
Answer
Some ideas and associations:
You state “human can only aspire to fluency in so many different languages before mixing up words due to code switching”. I don't know whether this is true at all. On what research is it based? In my own case, I am fluent in 5 languages and do not mix up words or get confused about which language a word belongs to.
Fluency in language corresponds with numeracy; that has been demonstrated. Children who were read a lot of stories in pre-school, for example, were no better in general subjects than other children, but they were better at maths later on. Both involve logical thinking. Without logical thinking, humans cannot string words into a longer narrative either.
People choosing coding or computer science may prefer to work individually rather than in groups. That is a different dynamic, social vs. solo, from proficiency at languages.
  • asked a question related to Computer
Question
1 answer
How do I model a weir/barrage and import it into OpenFOAM?
Which type of solver is most convenient for this fluid flow?
Relevant answer
Answer
Hello, I suggest you start with the basics of OpenFOAM. You can find lots of resources on the official pages; see https://www.openfoam.com/documentation/user-guide or https://doc.cfd.direct/openfoam/user-guide-v11/index. There are many more when you google it.
As regards the solver, for your application you could use one of the multiphase solvers, e.g. interFoam. Look for a dam-break tutorial; it could be the right solver for you, and you can then adjust it to your needs.
Good luck! Jan
  • asked a question related to Computer
Question
2 answers
Suppose I compute a least squares regression with the growth rate of y against the growth rate of x and a constant. How do I recover the elasticity of the level of y against the level of x from the estimated coefficient?
Relevant answer
Answer
The elasticity of y with respect to x is defined as the percentage change in y resulting from a one-percent change in x, holding all else constant. In the context of your regression model, where you have regressed the growth rate of y (which can be thought of as the percentage change in y) against the growth rate of x (the percentage change in x), the estimated coefficient on the growth rate of x is an estimate of this elasticity directly.
Here's why: If you run the following regression:
Δ%y=a+b(Δ%x)+ϵ
where Δ%y is the growth rate of y (dependent variable), Δ%x is the growth rate of x (independent variable), a is the constant term, b is the slope coefficient, and ϵ is the error term, the coefficient b represents the change in Δ%y for a one-unit change in Δ%x. Because Δ%y and Δ%x are already in percentage terms, the coefficient b is the elasticity of y with respect to x.
So, if you have estimated the coefficient b from this regression, you have already estimated the elasticity. There is no need to recover or transform the coefficient further; the estimated coefficient b is the elasticity of y with respect to x.
It's important to note that this interpretation assumes that the relationship between y and x is log-linear, meaning the natural logarithm of y is a linear function of the natural logarithm of x, and the model is correctly specified without omitted variable bias or other issues that could affect the estimator's consistency.
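A small numerical illustration of that point (Python, synthetic data): regressing the log-difference (growth rate) of y on the log-difference of x recovers the elasticity directly as the slope coefficient:
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
log_x = np.cumsum(rng.normal(0, 0.05, n))        # a random-walk log series for x
log_y = 0.7 * log_x + rng.normal(0, 0.02, n)     # true elasticity = 0.7

dly = np.diff(log_y)                             # growth rate of y
dlx = np.diff(log_x)                             # growth rate of x

model = sm.OLS(dly, sm.add_constant(dlx)).fit()
print(model.params[1])                           # slope ≈ 0.7 = elasticity of y w.r.t. x
```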
  • asked a question related to Computer
Question
3 answers
Hi Everyone,
I want to know some of the best tools or platforms for planning research works (PhD or/and Master's research). Till now I have been using MS Excel to schedule my work, but it is not smart. So, I want to know if a tool or platform is there to use as a daily, weekly, or monthly planner. It can be free or paid; app-based (which means I have to download it on my computer and use it), web-based (which means it's only available online), or both; licensed or unlicensed; AI-based or without AI assistant.
I will give some options for our research community to comment in the chat below. The options are:-
1. MS-Excel
2. Google Calendar
3. MS-Word (making tables)
4. Desktop apps (with AI assistant)
5. Desktop apps (without AI assistant)
6. Web-based apps (with AI assistant)
7. Web-based apps (without AI assistant)
8. No digital planner, only personal diary
9. Other ways
If your option is other ways please mention.
Thank you.
Regards,
Kaustav Sengupta
#plannertool #tool #researchtool #researchplanner #dailyplanner #weeklyplanner #monthlyplanner #schedule #plannerapp #ai #aicommunity #plannerly #researchers #research #researchcommunity #researchstudy #researchmethods #researchassistant #elsevier #researchjourney
Relevant answer
Answer
The choice of tools or platforms for planning research works can depend on various factors, including the nature of your research, collaboration needs, and personal preferences. Here are some widely used and versatile tools/platforms for planning and organizing research:
  1. Reference Management Software:Zotero, Mendeley, EndNote: These tools help you organize and manage your references, citations, and bibliographies. They can be especially useful for academic and scientific research.
  2. Note-Taking and Organization:Evernote, Microsoft OneNote: These tools are versatile for note-taking, organizing thoughts, and collecting research materials. They often support multimedia elements and can be synchronized across devices.
  3. Project Management Tools:Trello, Asana, Jira: These tools are effective for managing tasks, setting deadlines, and collaborating with team members. They provide visual boards for organizing and tracking progress.
  4. Mind Mapping Software:XMind, MindMeister: Mind maps can be helpful for brainstorming, structuring ideas, and visualizing connections between concepts in your research.
  5. Collaborative Writing Platforms:Google Docs, Overleaf: These platforms facilitate collaborative writing, allowing multiple authors to work on a document simultaneously. Overleaf is particularly geared towards LaTeX-based scientific writing.
  6. Research Notebooks:Microsoft OneNote, Jupyter Notebooks: Jupyter Notebooks are popular in data science and coding-heavy research, while OneNote is more versatile for general note-taking.
  • asked a question related to Computer
Question
3 answers
what if i worship my computer as Moses had opposed? has Abraham evolved?
or, are we relegated to a truth where "GOD" is Good = Savagery?
no. right?
so, the context becomes if we worship the knowledge gained through computers, then we are adding on AI, and semantically crossing the line between church and state, no?
this put the Law of Artificial Intelligence on it's head.
what comes of this though is a merger between Metadata, and FEAR itself.
no joe king....
#616 = 216 @666 #Quakes ^ARE *Cumming %HOW (Way: ISIsis)
Relevant answer
Answer
Adnan, care to complete your 1/2 baked thought?
  • asked a question related to Computer
Question
8 answers
Hi, I recently installed a small windows HPC cluster (1 head node and 4 compute nodes) in my lab and i want to run Materials Studio on it. I am wondering if anyone has the experience in running Materials studio on Windows HPC cluster? (I would appreciate a detailed answer)
Thanks!
Relevant answer
Answer
Good day all, please how do I request for a license file for materials studio software?
  • asked a question related to Computer
Question
2 answers
Hello all, I want to compute the Energy drain with ns2. Can you help please?
Relevant answer
Answer
I want to compute the drain of energy in cc file
  • asked a question related to Computer
Question
1 answer
If an activation function has a jump discontinuity, then in the training process, can we implement backpropagation to compute the derivatives and update the parameters?
Relevant answer
Answer
Yes, because what matters isn't the activation function, but the cost function.
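One caveat worth adding, since it is not covered in the answer above: a jump discontinuity has zero derivative almost everywhere and none at the jump, so plain backpropagation through it produces no useful gradient signal. A common workaround is a surrogate or straight-through gradient, which keeps the discontinuous forward pass but substitutes a simple derivative in the backward pass. A minimal PyTorch sketch:
```python
import torch

class StepSTE(torch.autograd.Function):
    """Heaviside step in the forward pass; straight-through (identity) gradient backward."""
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # surrogate gradient: pass the incoming gradient through unchanged
        return grad_output

x = torch.randn(4, requires_grad=True)
y = StepSTE.apply(x).sum()
y.backward()
print(x.grad)   # identity surrogate gradient, not the true (zero almost everywhere) derivative
```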
  • asked a question related to Computer
Question
5 answers
Is there any resemblance in how "information flow" happens in humans (Molecular Biology/Neurology) and computers (Computer Networking)?
This question came up after a Computer Networking (CN) lecturer remarked that no new methodologies for information flow in CN remain to be discovered.
I became interested in looking for the answer in the Central Dogma of Molecular Biology, and I still await further insights.
Relevant answer
Answer
Absolutely, there are some interesting parallels between how information flows in humans and in computers, although the mechanisms are quite different.
In humans, at the molecular biology level, information flow is governed by the Central Dogma, where DNA is transcribed into RNA, and then RNA is translated into proteins. These proteins are crucial for various cellular functions. In neurology, information flow involves neurons transmitting electrical and chemical signals across synapses, essentially how our brain communicates and processes information.
In computers, information flow happens through computer networking, where data is transferred using protocols over various types of physical and wireless networks. This is akin to a structured set of rules determining how data packets are sent, received, and interpreted.
The resemblance lies in the fundamental concept of transmitting information. In both humans and computers, information is encoded, transmitted, received, and then decoded. In humans, it's more about biochemical and electrical signals, while in computers, it's about digital data packets.
However, it's important to remember that the underlying processes and materials (biological vs. electronic) are fundamentally different. It's a fascinating area of study, with each field having its own complexities and advancements!
  • asked a question related to Computer
Question
3 answers
Scopus journals
Relevant answer
Answer
Here are some Scopus-indexed journals related to computer science that were generally well-regarded:
1. IEEE Transactions on Computers
2. ACM Transactions on Computer Systems
3. Journal of Computer Science and Technology
4. Information Sciences
5. Journal of Computer and System Sciences
6. Journal of Artificial Intelligence Research
7. Pattern Recognition
8. Expert Systems with Applications
9. Journal of Network and Computer Applications
It's advisable to check the latest Scopus database or journal rankings for the most up-to-date information. Additionally, the appropriateness of a journal may depend on the specific subfield within computer science that you're interested in. Always review the scope and focus of the journal to ensure it aligns with your research area.
  • asked a question related to Computer
Question
13 answers
The future of blockchain-based internet solutions
Blockchain is defined as a decentralized and distributed database in the open source model in a peer-to-peer internet network without central computers and without a centralized data storage space, used to record individual transactions, payments or journal entries encoded using cryptographic algorithms.
In current applications, blockchain is usually a decentralized and dispersed register of financial transactions. It is also a decentralized transaction platform in a distributed network infrastructure. In this form, blockchain is currently being implemented in financial institutions.
Some banks are already trying to use blockchain in their operations. If they did not, other economic entities, including fintechs, implementing blockchain could become more competitive in this respect. However, cryptocurrencies and a secure record of transactions are not the only blockchain applications; various other potential applications are being considered for the future.
Perhaps these new, different applications already exist in specific companies, corporations, public institutions or research centers in individual countries. In view of the above, the current question is: in what applications, besides cryptocurrency, is blockchain used in your company, organization, country, etc.?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
Answer
A. Abusukhon, Z. Mohammad, A. Al-Thaher (2021) An authenticated, secure, and mutable multiple-session-keys protocol based on elliptic curve cryptography and text_to-image encryption algorithm. Concurrency and computation practice and experience. [Science Citation Index].
A. Abusukhon, N. Anwar, M. Mohammad, Z., Alghanam, B. (2019) A hybrid network security algorithm based on Diffie Hellman and Text-to-Image Encryption algorithm. Journal of Discrete Mathematical Sciences and Cryptography. 22(1) pp. 65-81. (SCOPUS). https://www.tandfonline.com/doi/abs/10.1080/09720529.2019.1569821
A. Abusukhon, B. Wawashin, B. (2015) A secure network communication protocol based on text to barcode encryption algorithm. International Journal of Advanced Computer Science and Applications (IJACSA). (ISI indexing). https://thesai.org/Publications/ViewPaper?Volume=6&Issue=12&Code=IJACSA&SerialNo=9
A. Abusukhon, Talib, M., and Almimi, H. (2014) Distributed Text-to-Image Encryption Algorithm. International Journal of Computer Applications (IJCA), 106 (1). [ available online at : https://www.semanticscholar.org/paper/Distributed-Text-to-Image-Encryption-Algorithm-Ahmad-Mohammad/0764b3bd89e820afc6007b048dac159d98ba5326]
A. Abusukhon (2013) Block Cipher Encryption for Text-to-Image Algorithm. International Journal of Computer Engineering and Technology (IJCET). 4(3) , 50-59. http://www.zuj.edu.jo/portal/ahmad-abu-alsokhon/wpcontent/uploads/sites/15/BLOCK-CIPHER-ENCRYPTION-FOR-TEXT-TO-IMAGE ALGORITHM.pdf
A. Abusukhon, Talib, M. and Nabulsi, M. (2012) Analyzing the Efficiency of Text-to-Image Encryption Algorithm. International Journal of Advanced Computer Science and Applications (IJACSA) (ISI indexing), 3(11), 35-38. https://thesai.org/Publications/ViewPaper?Volume=3&Issue=11&Code=IJACSA&SerialNo=6
A. Abusukhon, Talib M., Issa, O. (2012) Secure Network Communication Based on Text to Image Encryption. International Journal of Cyber-Security and Digital Forensics (IJCSDF), 1(4). The Society of Digital Information and Wireless Communications (SDIWC) 2012. https://www.semanticscholar.org/paper/SECURENETWORK-COMMUNICATION-BASED-ON-TEXT-TO-IMAGE-Abusukhon-Talib/1d122f280e0d390263971842cc54f1b044df8161
  • asked a question related to Computer
Question
1 answer
Hello everyone
I need to download an older version of Optisystem (Optisystem 13), but I can't seem to find a download link for this version. I had this version on my computer, but it crashed. Is there any way to download it again?
Relevant answer
Answer
Maybe it will help you.
  • asked a question related to Computer
Question
3 answers
Things are likely to become yet more complex as the use of artificial intelligence by artists becomes more widespread, and as machines get better at producing creative works, further blurring the distinction between artwork made by a human and artwork made by a computer. So here the question arises: should computers be given status?
Relevant answer
Answer
No. Can a computer or any other device smell? No.
Smell is a very important faculty.
  • asked a question related to Computer
Question
2 answers
Hi! DEA newbie here. How do I compute the technical efficiency of DMUs in Microsoft Excel if I have 2 or more outputs? I'm going for an output-oriented DEA with VRS assumptions, as suggested. Thank you very much.
Relevant answer
Answer
1. Set up your Excel spreadsheet: Create a table with the DMUs in the rows and the input and output variables in the columns.
2. Calculate the normalized values: Normalize the input and output variables to ensure that they are on a comparable scale. This can be done using various methods such as min-max normalization or z-score normalization. Apply the chosen normalization method to each variable column.
3. Set up the DEA model: Create a new worksheet or section in your spreadsheet to set up the DEA model. In this section, you will define the linear programming problem for each DMU.
4. Specify the objective function and constraints: In the DEA model section, set up the objective function and constraints based on the VRS assumptions. The objective function maximizes the efficiency score for each DMU, and the constraints ensure that the DMUs are efficient relative to each other. These constraints typically involve the normalized input and output values.
5. Solve the DEA model: Use Excel's Solver add-in or any linear programming solver tool to solve the DEA model. Set the solver to maximize the objective function while satisfying the defined constraints.
6. Retrieve the efficiency scores: Once the DEA model is solved, the efficiency scores will be obtained as the solution. These scores represent the technical efficiency of each DMU relative to the others. You can copy the efficiency scores back to the original table or create a new column to display the efficiency scores.
Good luck: partial credit AI
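If Excel Solver becomes cumbersome with several outputs, the same output-oriented VRS (BCC) model can be solved per DMU as a small linear program in any LP tool. Here is a hedged Python sketch using scipy.optimize.linprog, where the input/output matrices are placeholders:
```python
import numpy as np
from scipy.optimize import linprog

# Placeholder data: 5 DMUs, 2 inputs (rows of X), 2 outputs (rows of Y)
X = np.array([[20, 30, 40, 20, 50],
              [15, 15, 20, 30, 25]], dtype=float)
Y = np.array([[100, 150, 160, 120, 200],
              [ 40,  45,  70,  50,  80]], dtype=float)
n = X.shape[1]

def output_oriented_vrs_efficiency(k):
    """Solve: max phi s.t. X@lam <= X[:,k], Y@lam >= phi*Y[:,k], sum(lam) = 1, lam >= 0."""
    # Decision variables: [phi, lam_1 ... lam_n]; linprog minimizes, so use -phi
    c = np.concatenate(([-1.0], np.zeros(n)))
    # Input constraints: sum_j lam_j * x_ij <= x_ik
    A_in = np.hstack([np.zeros((X.shape[0], 1)), X])
    b_in = X[:, k]
    # Output constraints: phi*y_rk - sum_j lam_j * y_rj <= 0
    A_out = np.hstack([Y[:, [k]], -Y])
    b_out = np.zeros(Y.shape[0])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([b_in, b_out])
    # VRS convexity constraint: sum(lam) = 1
    A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    phi = res.x[0]
    return 1.0 / phi          # output-oriented efficiency score in (0, 1]

for k in range(n):
    print(f"DMU {k+1}: efficiency = {output_oriented_vrs_efficiency(k):.3f}")
```
Each DMU gets its own LP; the reported score is 1/φ so that efficient DMUs score exactly 1.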
  • asked a question related to Computer
Question
1 answer
I am looking for a Scopus Q3 journal that does not take too much time to give me the first decision.
Relevant answer
Answer
Visit the Scopus website (www.scopus.com)
  • asked a question related to Computer
Question
5 answers
What is the best Programming Language for a ninth grader that has no previous experiences in programming?
Relevant answer
Answer
1. Python: Python is often recommended as the first programming language for beginners. It has a clean and readable syntax, which makes it easier to understand and write code. It is used in fields such as web development, data analysis, artificial intelligence, and more.
2. Scratch: Scratch is a visual programming language developed by MIT. It uses a drag-and-drop interface that allows users to create interactive stories, animations, and games. Scratch is specifically designed for beginners and provides a fun and intuitive way to learn programming concepts. It helps students understand fundamental programming concepts like loops, conditionals, and variables.
Good luck
  • asked a question related to Computer
Question
4 answers
Do you know of any of this software that is compatible with Apple computers? I tried GelAnalyzer4, PyElph, and the free online gel analyzer, but it is not working anymore.
What would be the minimum amount of protein needed to detect heme with TMBZ (3,3',5,5'-tetramethylbenzidine)? I am using cytochrome c from equine heart (positive control), chitinase (negative control), and Z-ISO (unknown). I want to do a quick spot test on filter paper for the presence of heme.
I would greatly appreciate any suggestions.
Relevant answer
Answer
Manuele Martinelli, thank you for the notes. They help explore different ways to analyze it.
  • asked a question related to Computer
Question
4 answers
Computer desks must conduct electricity to the ground and be grounded, in order for the user not to endure an electric shock; the floor must also be grounded, and this applies to other furniture holding electrical appliances.
Relevant answer
Answer
Dear Doctor
"Grounding it's done for safety in case current leaks out the power supply IF it's broken and it passes the current to case then the case itself becomes dangerous if you touch it. If you have grounding the current will go to the ground instead of you."
  • asked a question related to Computer
Question
4 answers
Hi,
can someone point me to a paper/reference that describes how to compute d-prime for data in a 2AFC/2IFC design?
Thank you
Luigi
Relevant answer
Answer
Reference: Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
The book "Signal Detection Theory and Psychophysics" by Green and Swets is a classic reference in the field of psychophysics. It provides a comprehensive explanation of signal detection theory, including the computation of d-prime (d') for 2AFC/2IFC designs. It covers various topics related to signal detection theory, including threshold estimation, receiver operating characteristic (ROC) analysis, and the interpretation of d-prime values.
While the book was published in 1966, it remains a widely recognized and influential resource in the field. It may be helpful to consult more recent literature for any advancements or refinements in the computation of d-prime specifically for 2AFC/2IFC designs, as the field of psychophysics has evolved since the book's publication.
Hope it helps:credit AI
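For the computation itself, the standard result (also covered in Macmillan & Creelman's Detection Theory: A User's Guide) is that for 2AFC/2IFC, d' = [z(H) − z(F)] / √2 when hit and false-alarm rates are defined over the two intervals, or equivalently d' = √2 · z(PC) for an unbiased observer with proportion correct PC. A minimal Python sketch:
```python
from math import sqrt
from scipy.stats import norm

def dprime_2afc(hit_rate, fa_rate):
    """d' for a 2AFC/2IFC task from 'interval 1' hit and false-alarm rates."""
    return (norm.ppf(hit_rate) - norm.ppf(fa_rate)) / sqrt(2)

def dprime_2afc_from_pc(prop_correct):
    """d' for an unbiased 2AFC observer from proportion correct."""
    return sqrt(2) * norm.ppf(prop_correct)

print(dprime_2afc(0.80, 0.25))        # example rates
print(dprime_2afc_from_pc(0.85))
```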
  • asked a question related to Computer
Question
6 answers
Does anyone want to collaborate?
I am looking for some people to collaborate on research with me. I focus on computer science-related topics (information security management or business informatics).
Relevant answer
Answer
I want, when
  • asked a question related to Computer
Question
3 answers
Hi,
I have a question regarding the derivation in the attached paper.
How did the author get eq. 27 by solving the dp/dx integrals?
When I computed integrals for dx/h^4 and dx/h^2 with limits from 0 to lambda, my resulting answer is 0 when I substitute the limits.
I am using
to calculate integrals
h = 1 + phi*(sin(2*pi*(x/lambda)))
Could anyone please help
I really appreciate any help you can provide.
regards,
Shannon
Relevant answer
Answer
Thank you so much! Let me relook into this problem
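One quick sanity check is to evaluate the integrals numerically: with h = 1 + φ·sin(2πx/λ) and 0 < φ < 1, the integrands 1/h² and 1/h⁴ are strictly positive, so the integrals over one wavelength cannot be zero. A small sketch (Python, with illustrative values of φ and λ):
```python
import numpy as np
from scipy.integrate import quad

phi, lam = 0.4, 1.0          # illustrative amplitude ratio and wavelength

h = lambda x: 1 + phi * np.sin(2 * np.pi * x / lam)

I2, _ = quad(lambda x: 1 / h(x) ** 2, 0, lam)
I4, _ = quad(lambda x: 1 / h(x) ** 4, 0, lam)
print(I2, I4)                # both positive, so a result of 0 points to an algebra slip
```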
  • asked a question related to Computer
Question
3 answers
Our research is quasi-experimental. There are two groups to be tested under different teaching approaches however we don't know how many participants should be in a group.
Relevant answer
Answer
Without knowing more details about your research project, it is difficult to make meaningful statements here. Which research approaches in which domain do you want to test empirically? It cannot just be about the number of participants in the group. As a rule, certain preconditions and contextual conditions must be taken into account: What is the research question? What are the scientific objectives? Which specific teaching/ learning settings are to be evaluated: combined vs. shared, synchronous vs. asynchronous, individual or cooperative, etc.? Should it be a comparative study? Is a control group planned? These and other requirements must be clarified and defined. Then the research design can be precisely conceptualized.
  • asked a question related to Computer
Question
4 answers
Define the following terms:
Viruses
Information
The computer
Relevant answer
Answer
Dear Doctor
[A computer virus is a type of malicious software, or malware, that spreads between computers and causes damage to data and software. Computer viruses aim to disrupt systems, cause major operational issues, and result in data loss and leakage.]
[Information is stimuli that has meaning in some context for its receiver. When information is entered into and stored in a computer, it is generally referred to as data. After processing -- such as formatting and printing -- output data can again be perceived as information. When information is compiled or used to better understand something or to do something, it becomes knowledge.]
[A computer is a machine that can be programmed to carry out sequences of arithmetic or logical operations automatically. Modern digital electronic computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks.]
  • asked a question related to Computer
Question
2 answers
Test the Hypothesis: "DIGITAL DENKEN" based on basic software "PLAI" (polytrope linguistic artificial intelligence) complies with EU-recommendations and EU-rules for multiple AI-softwares within one system.
We will make the software system available for examination. PLAI runs on a commercially available computer. DIGITAL DENKEN includes human workshops combined with software usage down to laptops. PLAI offers at least 12 different usage-approaches. Examine now one of them as a first step.
Relevant answer
Answer
Thank you very much. We will address all of them if they examine PLAI. The last point we have been doing since 2010, owing to our own mindset and the influence of Prof. Rupert Lay. We started implementing this quite some time before the discussions about rules began. But we want an independent proof as assurance for investors. Even though our system needs much less money and energy, it requires a different way of programming, and ordinary investors are not interested.
  • asked a question related to Computer
Question
3 answers
"Which topics do you recommend for computer engineering with a focus on cyber security and deep learning, or any other hot topics suitable for a PhD degree in computer and communication engineering? I am in the early stages of my research and would appreciate any suggestions, including relevant papers. Additionally, I am seeking a co-supervisor to guide me throughout my research."
Relevant answer
Answer
Starting a PhD in Computer and Communication Engineering with a concentration in cybersecurity and deep learning is exciting. Here are some cutting-edge topics that match your interests, along with reading suggestions:
1. AI-Driven Cyber Threat Intelligence and Prediction: - Develop AI models to identify emerging cyber risks. This requires searching vast datasets for new attack patterns.
"Artificial Intelligence for Cybersecurity" by Yang et al.
2. Deep Learning for Malware Detection and Analysis: - Examining how deep learning might enhance malware detection, especially zero-day threats, by analysing code patterns and anomalies.
- "Malware Data Science: Attack Detection and Attribution" by Joshua Saxe and Hillary Sanders.
3. AI in Network Security and Anomaly Detection: - Creating deep learning models to identify network traffic anomalies, perhaps detecting intrusion attempts or odd activities.
- See "Network Traffic Anomaly Detection and Prevention" by Monowar H. Bhuyan, Dhruba K. Bhattacharyya, and Jugal K. Kalita.
4. Deep Learning in Cryptography and Secure Communications: - Investigating AI's role in cryptanalysis and inventing new cryptographic algorithms.
See Jean-Philippe Aumasson's "Serious Cryptography: A Practical Introduction to Modern Encryption".
5. AI-Enhanced IoT and Edge Computing Security Research: - Investigating AI-based security for IoT devices and infrastructure.
- See "Security and Privacy in Internet of Things (IoTs): Models, Algorithms, and Implementations" by Hu.
6. Privacy-Preserving AI in Cybersecurity: - Examining federated learning and differential privacy strategies to improve cybersecurity without compromising user privacy.
"Privacy-Preserving Machine Learning: Methods, Challenges, and Opportunities" by Xinjian Luo and Qiang Yang.
7. Addressing 5G Network Security Challenges with AI: - Examining AI solutions.
"5G Security: Fundamentals, Standards, and Practical Implementation" by Anand R. Prasad and Seung-Woo Seo.
8. Research on Explainable AI (XAI) in Cybersecurity: - Make AI judgements in cybersecurity more visible and explainable.
"Explainable AI: Interpreting, Explaining and Visualising Deep Learning" by Wojciech Samek et al.
9. Cyber-Physical Systems Security: - Investigating deep learning-based security solutions for critical infrastructures.
Houbing Song et al. edited "Cyber-Physical Systems: Foundations, Principles and Applications".
Find academics or professionals with good backgrounds in these areas as co-supervisors. They are available through academic departments, research papers, and cybersecurity and AI conferences and seminars.
These cutting-edge computer and communication engineering fields provide many PhD research options. The suggested academic library or journal database references might provide a theoretical and practical grasp of these domains.
  • asked a question related to Computer
Question
1 answer
I am trying to gather information about Green's functions for the steady Euler and Navier-Stokes equations, which are the linearized response to point source perturbations. The ultimate goal is to compute the force that these singular solutions exert on solid boundaries. This is a text-book classic exercise in the case of potential flow (i.e., force exerted by potential point sources or vortices on a circular cylinder, for example), and I would like to learn more about the analogous situation in compressible, inviscid flow (subcritical flow past an airfoil, for example) or incompressible, viscous flow (flow past a flat plate, for example).
Relevant answer
Answer
In fluid dynamics, Green's functions play a crucial role in solving steady flow problems governed by the Euler and Navier-Stokes equations. These functions, denoted by G(x, y) for a point source at (x, y), represent the perturbation in the velocity and pressure fields induced by a unit point force acting at that location.
Steady Euler Equation
For the steady Euler equation, which describes inviscid incompressible fluid flow, the Green's function satisfies the following equation:
∇² G(x, y) + δ(x, y) = 0
where δ(x, y) is the Dirac delta function representing the point source. The solution to this equation is given by:
G(x, y) = -1/(2π) ln(|x - y|)
This Green's function represents the velocity potential due to a unit point source in the steady Euler flow.
Steady Navier-Stokes Equation
For the steady Navier-Stokes equation, which incorporates the effects of viscosity, the Green's function satisfies a modified equation:
∇² G(x, y) + δ(x, y) - Re∇² G(x, y) = 0
where Re is the Reynolds number, a dimensionless parameter characterizing the flow regime. The solution to this equation is given by:
G(x, y) = K_0(Re |x - y|)
where K_0 is the modified Bessel function of the second kind of order zero. This Green's function represents the velocity perturbation due to a unit point force in the steady Navier-Stokes flow.
Applications
Green's functions are widely used in various fluid dynamics applications, including:
  • Solving steady flow problems: By superposing the contributions from multiple point sources, one can construct the velocity and pressure fields for complex flow geometries.
  • Analyzing flow singularities: Green's functions provide insights into the behavior of the flow near singularities, such as corners and cusps.
  • Investigating flow stability: Green's functions can be used to study the stability of steady flow solutions and identify potential sources of instability.
In summary, Green's functions serve as powerful tools for understanding and analyzing steady fluid flows governed by the Euler and Navier-Stokes equations. Their applications span a wide range of fluid dynamics problems, from fundamental theoretical studies to practical engineering applications.
  • asked a question related to Computer
Question
2 answers
Hyperspectral Imaging, Hyperspectral Classification, Statistical Test
Relevant answer
Answer
Hi
There are several reasons why statistical tests might not be applied:
1. Sample Size and Variability.
2. Marginal Improvements.
3. Computational Complexity.
4. Focus on Other Metrics.
5. Methodological Preference.
6. Lack of Standardization.
However, it is generally considered good practice to apply statistical tests in such scenarios to rigorously validate the results.
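When a test is wanted, a common choice for comparing two classifiers evaluated on the same labelled pixels is McNemar's test on their disagreement counts. A minimal sketch (Python, with synthetic predictions standing in for two classifiers):
```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                                 # placeholder labels
pred_a = np.where(rng.random(500) < 0.90, y_true, 1 - y_true)    # ~90% accurate classifier A
pred_b = np.where(rng.random(500) < 0.85, y_true, 1 - y_true)    # ~85% accurate classifier B

a_correct = pred_a == y_true
b_correct = pred_b == y_true
b01 = np.sum(a_correct & ~b_correct)   # A right, B wrong
b10 = np.sum(~a_correct & b_correct)   # A wrong, B right

stat = (abs(b01 - b10) - 1) ** 2 / (b01 + b10)   # continuity-corrected McNemar statistic
p_value = chi2.sf(stat, df=1)
print(stat, p_value)
```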
  • asked a question related to Computer
Question
2 answers
Hi to all of the eminent research community: I want to do research on companies' sustainable reporting frameworks and efficiency, and for this I need to calculate sustainable reporting scores. Can you please guide me on how to compute them?
Relevant answer
Answer
Example of a Python program structure to calculate a Sustainable Reporting Score:
```python
def calculate_sustainable_reporting_score(data):
    # Define your scoring methodology and calculation logic here
    score = 0
    # Perform calculations and update the score based on your criteria
    # Example: add weighted scores for different factors or indicators
    score += data['factor1'] * weight_factor1
    score += data['factor2'] * weight_factor2
    score += data['factor3'] * weight_factor3
    return score

# Example usage
data = {
    'factor1': 0.75,
    'factor2': 0.85,
    'factor3': 0.9
}

# Define the weights for each factor based on your scoring methodology
weight_factor1 = 0.4
weight_factor2 = 0.3
weight_factor3 = 0.3

# Calculate the Sustainable Reporting Score
score = calculate_sustainable_reporting_score(data)

# Print the result
print("Sustainable Reporting Score:", score)
```
In the above example, the `calculate_sustainable_reporting_score()` function takes in a `data` dictionary as input, which contains the factors or indicators you want to consider in your scoring methodology. You can define your own factors and assign weights to each factor based on their importance in the overall score calculation.
The function performs the calculations based on your specific methodology and returns the resulting score.
In the example usage section, I've provided sample data and weights for three factors (`factor1`, `factor2`, and `factor3`). You can modify this section to input your own data and weights.
Hope it helps
  • asked a question related to Computer
Question
2 answers
Hello,
I am trying to determine a cut-off value (probably equivalent to a "change-point") for an ELISA assay, using an R package. To make our assay convenient, I do not include either a negative or a positive control on every plate.
A reference paper (doi: 10.1590/0074-02760160119) indicates a package saying "the Pruned Exact Linear Time (PELT) algorithm was selected (Killick et al. 2012) with the CUSUM method as detection option (Page 1954). The PELT algorithm can therefore rapidly detect various change-points in a series. The CUSUM method
is based on cumulative sums and operates as follow: The absorbance values x are ordered in ascending values (x1,…xn) and sums (S) are computed sequentially as S0 = 0, Si+1 = max (0, Si + xi - Li), where Li is the likelihood function. When the value of S exceeds a threshold, a change-point has been detected."
I am not sure how to write such code and could not find a package on the Internet. If someone has experience determining a "change-point" using R, please tell me how to do it.
Thank you.
Relevant answer
Answer
example of how you can achieve this using the R programming language:
```R
# Install and load the required packages
install.packages("changepoint")
library(changepoint)
# Generate example ELISA assay data
# Replace with your actual ELISA assay data
data <- c(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5)
# Perform change-point analysis
cpt <- cpt.mean(data, method = "BinSeg")
# Get the estimated change-point
change_point <- cpt@cpts[1]
# Print the estimated change-point
cat("Estimated Change-Point:", change_point, "\n")
```
`changepoint` package is used to perform change-point analysis on the ELISA assay data. First, you need to install the package using `install.packages("changepoint")`. Then, you can load the package with `library(changepoint)`.
Replace the `data` vector with your actual ELISA assay data. This vector should contain the measured values from your assay.
The `cpt.mean` function is used to perform the change-point analysis using the "BinSeg" (binary segmentation) method, which is a popular algorithm for detecting change-points in data. Since your reference paper uses PELT, note that the same function also accepts `method = "PELT"`; you can explore the other available methods in the `changepoint` package documentation.
The estimated change-point is extracted from the analysis result using `cpt@cpts[1]`, assuming there is a single change-point detected. If you expect multiple change-points, you can access them accordingly.
Hope it helps
  • asked a question related to Computer
Question
2 answers
Hello,
I have performed an MM-PBSA calculation on a protein-ligand complex. I used approximately 750 frames (out of a total of 15,000 frames) to compute the free energy change of binding. Then, I used a Python script to compute the ACF of the total delta G of binding. However, the obtained ACF plot does not show the expected exponential decay, and I am not able to figure out why. I am attaching my plot here.
Any suggestions would be highly appreciated.
Relevant answer
Answer
you need a dataset that contains the free energy values at different time points. Assuming you have such a dataset, here's an example Python code that calculates the auto-correlation function using the `numpy` library:
```python
import numpy as np

def auto_correlation(data):
    """
    Calculates the auto-correlation function of a given dataset.

    Parameters:
        data (numpy array): 1D array of free energy values.

    Returns:
        acf (numpy array): Auto-correlation function.
    """
    n = len(data)
    mean = np.mean(data)
    data_normalized = data - mean
    acf = np.correlate(data_normalized, data_normalized, mode='full')[-n:]
    acf /= (n * np.var(data))
    return acf

# Example dataset (replace with your own delta G time series)
free_energy_data = np.array([1.2, 1.5, 2.1, 2.5, 2.8, 2.9, 2.7, 2.4, 2.0, 1.8])

# Calculate auto-correlation function
acf = auto_correlation(free_energy_data)

# Print the auto-correlation values
print(acf)
```
Replace as needed. Hope it helps
  • asked a question related to Computer
Question
3 answers
I am using ELK-FPLAPW code to compute band structure of some materials. From the output files, I can plot l (or m_l) resolved band structure using Origin-Pro. But since I have the license of this product for a limited period of time I want to use xmgrace, which is free, to plot the same.
Has anyone done it? If yes then could you guide me or point me toward appropriate resources.
Relevant answer
Answer
Organize Data into Separate Files:
  1. For each l-value (0, 1, 2, etc.), create a separate file containing k-points, energy, and the corresponding l-character values.
  2. Each file should contain two or three columns, with the first column being k-points, the second column being energy, and optionally, the third column representing the l-character values.
After organizing your data into separate files, you can use xmgrace to plot the band structures for different l-values.
  1. Open xmgrace and go to "Data" > "Import" to load the data files corresponding to each l-value.
  2. Select "Set Type" as XY and choose the columns containing k-points and energy for your x and y axes, respectively.
To incorporate the l-character or weightage into the plot, you can utilize xmgrace's features such as:
Plot Multiple Datasets: Plot each l-value as a separate dataset (using different colors or line types for distinction).
Legend: Label each dataset with its respective l-value for clear identification.
Error Bars/Additional Data Columns: Utilize extra columns to represent the l-character value, for instance, by varying symbol size or color according to this value.
Customize the plot appearance, including axis labels, title, legends, and other formatting options as needed to enhance readability and presentation.
Once you have set up the plot to represent each l-value separately, you can compare and analyze the band structures for different l-values; a short data-splitting script is sketched below. I hope this helps.
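Below is a small Python sketch of the data-splitting step. It assumes a plain whitespace-separated text file with columns: k-point distance, energy, and one weight column per l-value. The file name and column layout are assumptions; adapt them to your actual ELK band-character output.
```python
import numpy as np

# Hypothetical input format (adjust to your ELK output):
#   k-point distance, energy, weight(l=0), weight(l=1), weight(l=2), ...
data = np.loadtxt("band_lcharacter.dat")

k, energy = data[:, 0], data[:, 1]
weights = data[:, 2:]

for l in range(weights.shape[1]):
    # Three columns per output file: k, E, l-character weight -> one file per l for xmgrace
    out = np.column_stack([k, energy, weights[:, l]])
    np.savetxt(f"band_l{l}.dat", out, fmt="%.6f")
    print(f"wrote band_l{l}.dat with {out.shape[0]} rows")
```
Each resulting file can then be imported into xmgrace as a separate set, with the third column used to scale symbol size or colour.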
  • asked a question related to Computer
Question
4 answers
I am modelling seepage using SEEP/W, and the result shows XY gradient contours outside the phreatic line. I think this does not make sense, so how do I compute the safety factor against boiling if the exit gradient computed by SEEP/W is wrong?
Every answer would be appreciated. Thanks before.
Relevant answer
Answer
In my thesis, I compared the 2D and 3D modes. I explained this in my article “2D and 3D Modeling of Transient Seepage from Earth Dams Thorough Finite Element Model (Case Study: Kordaliya Dam)”, which is available on my profile. The 3D SEEP results are more accurate in semi-saturated soils.
  • asked a question related to Computer
Question
5 answers
Who is referred to as the father of computers?
Relevant answer
Answer
Dear Charles Waruru,
the computer had many fathers. Of these are particularly noteworthy:
Konrad Zuse, John von Neumann, Alan Turing, Charles Babbage, John Atanasoff, Henry Edward Roberts
For more information about these computer fathers see the added addresses.
Best regards
Anatol Badach
Konrad Zuse
John von Neumann
Alan Turing
Charles Babbage
John Atanasoff
Henry Edward Roberts; Father of Personal Computer
  • asked a question related to Computer
Question
2 answers
Hello everyone
I need to compute in MATLAB both the Lyapunov exponents for 2D discrete-chaotic Maps
For example the 2D Logistic map defined as:
x(n+1)=r*(3*y(n)+1)*x(n)*(1-x(n));
y(n+1)=r*(3*x(n)+1)*y(n)*(1-y(n));
where, r = 0.4: 0.01: 1.2
I am comfortable with computing the LE for any 1D discrete chaotic map like 1D logistic map with that differentiation method. But, I am confused for its 2D version. Is there any method of computing its two LEs using its generated two time series ( for x-series and y-series)? or How that differentiation method be extended for this 2D version?
Please help and provide the sample Matlab code (for any discrete 2D chaotic map) if anyone is having it. I shall be highly thankful for the kind help and guidance.
Best Regards,
E. Mehallel
Relevant answer
Answer
Contact me by email.
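For reference, one standard approach for the 2D case is the Benettin/QR method: propagate a pair of tangent vectors with the map's Jacobian, re-orthonormalize them at every step, and average the logarithms of the stretching factors. Below is a minimal Python sketch of that idea for the coupled 2D logistic map from the question; the iteration counts, initial conditions, and r values are placeholders, and the same algorithm translates directly to MATLAB.
```python
import numpy as np

def step(x, y, r):
    """One iteration of the coupled 2D logistic map from the question."""
    return r * (3 * y + 1) * x * (1 - x), r * (3 * x + 1) * y * (1 - y)

def jacobian(x, y, r):
    """Analytical Jacobian of the map at (x, y)."""
    return np.array([
        [r * (3 * y + 1) * (1 - 2 * x), 3 * r * x * (1 - x)],
        [3 * r * y * (1 - y),           r * (3 * x + 1) * (1 - 2 * y)],
    ])

def lyapunov_2d(r, n_iter=20000, n_transient=1000, x0=0.4, y0=0.3):
    """Estimate both Lyapunov exponents with the QR (Benettin) method."""
    x, y = x0, y0
    for _ in range(n_transient):                     # discard the transient
        x, y = step(x, y, r)
    Q = np.eye(2)
    le_sum = np.zeros(2)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(jacobian(x, y, r) @ Q)   # re-orthonormalize tangent vectors
        le_sum += np.log(np.abs(np.diag(R)))
        x, y = step(x, y, r)
        if not (np.isfinite(x) and np.isfinite(y)):  # orbit escaped for this r
            return np.array([np.nan, np.nan])
    return le_sum / n_iter

# Placeholder r values; sweep r = 0.4:0.01:1.2 as in the question if needed
for r in (0.5, 0.9, 1.15):
    print(f"r = {r:4.2f}   LE1, LE2 = {lyapunov_2d(r)}")
```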
  • asked a question related to Computer
Question
5 answers
Ambient Intelligence vs Internet of Things? What is Similarities and Differences?
Relevant answer
Answer
Ambient Intelligence (AmI) and Internet of Things (IoT) are two concepts that have gained significant attention in the field of technology. While they share some similarities, there are also distinct differences between the two.
Ambient Intelligence refers to a computing environment that is sensitive and responsive to human presence. It aims to create an intelligent and intuitive system that can adapt to users' needs without explicit instructions. AmI systems utilize sensors, data analysis, and machine learning algorithms to provide personalized services in a seamless manner. For example, smart homes equipped with AmI technology can adjust lighting, temperature, and music preferences based on individual preferences.
On the other hand, the Internet of Things refers to a network of physical objects embedded with sensors, software, and connectivity capabilities. IoT enables these objects to collect and exchange data over the internet without human intervention. The main goal of IoT is to connect various devices for efficient communication and automation. For instance, IoT can be seen in applications like smart cities where streetlights automatically adjust their brightness based on real-time traffic conditions.
Although both AmI and IoT involve interconnected devices and rely on data collection for decision-making processes, there are key differences between them. Firstly, while AmI focuses on creating an intelligent environment that adapts to humans' needs seamlessly, IoT emphasizes connecting devices for efficient communication without direct human involvement.
Secondly, AmI systems primarily rely on local processing capabilities within the environment itself. This means that most of the data processing occurs within the immediate vicinity of users or devices. In contrast, IoT systems often rely on cloud computing for storing and analyzing large amounts of data collected from multiple sources.
Lastly, another difference lies in their scope of application. Ambient Intelligence has a more personal focus as it aims at providing personalized services tailored specifically for individuals or small groups. On the other hand, IoT has broader applications ranging from industrial automation to healthcare monitoring systems.
In conclusion, Ambient Intelligence (AmI) and Internet of Things (IoT) are two distinct concepts in the field of technology. While they share similarities in terms of interconnected devices and data collection, their focus, processing capabilities, and scope of application differ significantly. Both concepts have the potential to revolutionize various industries and improve our daily lives.
Reference:
Kidd, C.D., Orr, R.J., Abowd, G.D., Atkeson, C.G., Essa, I.A., MacIntyre, B., Mynatt E.D. & Starner T.E. (1999). The Aware Home: A Living Laboratory for Ubiquitous Computing Research. In Proceedings of the Second International Workshop on Cooperative Buildings (CoBuild '99), 191-198.
  • asked a question related to Computer
Question
7 answers
could involve security
Relevant answer
Answer
Hi Noelle,
Please take a look on my papers (Hope these papers may help you deciding on a specific topic):
■ A. Abusukhon Intelligent Shoes for Detecting Blind Falls Using the Internet of Things. KSII Transactions on Internet and Information Systems. Vol. 17, Issue 9. 2023
■ A. Abusukhon, A. Al-Fuqaha, B. Hawashin, A Novel Technique for Detecting Underground Water Pipeline Leakage Using the Internet of Things. Journal of Universal Computer Science (JUCS). Vol. 29, No. 8.
■ A. Abusukhon, IOT Bracelets for Guiding Blind People in an Indoor Environment, in Journal of Communications Software and Systems, vol. 19, no. 2, pp. 114-125, April 2023, doi: 10.24138/jcomss-2022-0160.
■ A. Abusukhon (2021) Towards Achieving a Balance between the User Satisfaction and the Power Conservation in the Internet of Things, IEEE Internet of Things Journal, doi: 10.1109/JIOT.2021.3051764. impact factor 9.936. Published by IEEE. https://ieeexplore.ieee.org/document/9326414. [Science Citation Index].
■ Ahmad Abusukhon, Bilal Hawashin and Mohammad Lafi (2021) An Efficient Algorithm for Reducing the Power Consumption in Offices Using the Internet of Things, International Journal of Advances in Soft Computing and its Applications (IJASCA). http://ijasca.zuj.edu.jo/Volumes.aspx
■ A. Abusukhon, Z. Mohammad, A. Al-Thaher (2021) An authenticated, secure, and mutable multiple-session-keys protocol based on elliptic curve cryptography and text_to-image encryption algorithm. Concurrency and computation practice and experience. [Science Citation Index].
■ B. Hawashin, A. Abusukhon An Efficient Course Recommender Using Deep Enriched Hidden Student Aptitudes. ICIC Express Letters, Part B: Applications, 2022.
■ A. Abusukhon, N. Anwar, M. Mohammad, Z., Alghanam, B. (2019) A hybrid network security algorithm based on Diffie Hellman and Text-to-Image Encryption algorithm. Journal of Discrete Mathematical Sciences and Cryptography. 22(1) pp. 65- 81. (SCOPUS). https://www.tandfonline.com/doi/abs/10.1080/09720529.2019.1569821
■ A. Abusukhon, B.Wawashin, B. (2015) A secure network communication protocol based on text to barcode encryption algorithm. International Journal of Advanced Computer Science and Applications (IJACSA). (ISI indexing). https://thesai.org/Publications/ViewPaper?Volume=6&Issue=12&Code=IJACSA&Seri alNo=9
■ A. Abusukhon, Talib, M., and Almimi, H. (2014) Distributed Text-to-Image Encryption Algorithm. International Journal of Computer Applications (IJCA), 106 (1). [ available online at : https://www.semanticscholar.org/paper/Distributed-Text-to-Image-Encryption-Algorithm-Ahmad-Mohammad/0764b3bd89e820afc6007b048dac159d98ba5326]
■ A. Abusukhon (2013) Block Cipher Encryption for Text-to-Image Algorithm. International Journal of Computer Engineering and Technology (IJCET). 4(3) , 50-59. http://www.zuj.edu.jo/portal/ahmad-abu-alsokhon/wpcontent/uploads/sites/15/BLOCK-CIPHER-ENCRYPTION-FOR-TEXT-TO-IMAGE ALGORITHM.pdf
■ A. Abusukhon, Talib, M. and Nabulsi, M. (2012) Analyzing the Efficiency of Text-to-Image Encryption Algorithm. International Journal of Advanced Computer Science and Applications ( IJACSA )(ISI indexing) , 3(11), 35 – 38. https://thesai.org/Publications/ViewPaper?Volume=3&Issue=11&Code=IJACSA&Seri alNo=6
■ A. Abusukhon, Talib M., Issa, O. (2012) Secure Network Communication Based on Text to Image Encryption. International Journal of Cyber-Security and Digital Forensics (IJCSDF), 1(4). The Society of Digital Information and Wireless Communications (SDIWC) 2012. https://www.semanticscholar.org/paper/SECURE NETWORK-COMMUNICATION-BASED-ON-TEXT-TO-IMAGE-Abusukhon-Talib/1d122f280e0d390263971842cc54f1b044df8161
  • asked a question related to Computer
Question
4 answers
The intersection of neuroscience, electronics, and AI has sparked a profound debate questioning whether humanity can be considered a form of technology itself. This discourse revolves around the comparison of the human chemical-electric nodes—neurons, with the nodes of a computer, and the potential implications of transplanting human consciousness into machines.
Neurons, as the elemental building blocks of the human brain, operate through the transmission of electrochemical signals, forming a complex network that underpins cognitive functions, emotions, and consciousness. In contrast, computer nodes are physical components designed to process and transmit data through electrical signals, governed by programmed algorithms.
The notion of transferring the human mind into a machine delves into the essence of human identity and the philosophical nuances of consciousness. While it may be feasible to replicate certain cognitive functions within a machine by mimicking neural networks, there are profound ethical and philosophical implications at stake.
Critics argue that even if a machine were to replicate the intricacies of the human brain, it would lack essential human qualities such as emotions, subjective experiences, and moral reasoning, thus failing to encapsulate the essence of human consciousness. Furthermore, the concept of integrating the human mind with machines raises complex questions about the nature of identity and self-awareness. If the entirety of a human mind were to be transplanted into a machine, the resulting entity may no longer fit the traditional definition of human, but rather a hybrid of human cognition and artificial intelligence.
On the other hand, proponents of merging human minds with machines foresee the potential for significant advancements in AI and neuroscience, suggesting that through advanced brain-computer interfaces, it might be possible to enhance human cognition and expand the capabilities of the human mind, blurring the boundaries between organic and artificial intelligence.
As the realms of electronics and AI continue to evolve, the question of whether humanity itself can be perceived as a form of technology remains a deeply contemplative issue. It is imperative that as these technological frontiers advance, ethical considerations and respect for human values are prioritized, ensuring that any progression in this field aligns with the preservation of human dignity and integrity.
The advancement of technology and the intricacies involved in simulating human cognitive processes suggest that it might be plausible for machines to exhibit emotions akin to humans. As the complexity of AI systems increases, managing a vast number of nodes and intricate algorithms could potentially lead to unexpected and seemingly irrational behaviors, which might even resemble emotional responses.
Similarly to how a basic machine operates in a predictable and precise manner devoid of human characteristics, the proliferation of complexity in a machine's structure could lead to the emergence of seemingly irrational or emotional behaviors. Managing the intricate interplay between a multitude of nodes might result in the manifestation of behaviors that mimic emotions, despite the absence of genuine human experience.
These behaviors could be centered around learned and preprogrammed principles, allowing the machine to respond in a manner that mirrors human emotions.
Moreover, the ability to simulate emotions in machines has gained traction due to the growing understanding of the role of neural networks and the intricate interplay of various computational elements within AI systems. As AI models become more sophisticated, they could feasibly process information in a way that mirrors the human emotional experience, albeit based on programmed responses rather than genuine feelings.
While the debate about whether machines can truly experience emotions similar to humans remains unsettled, the increasingly complex and interconnected nature of AI systems hints at the potential for machines to display a form of emotive behavior as they grapple with the challenges of managing a multitude of nodes and algorithms.
This perspective challenges the conventional notion that emotions are exclusively tied to human consciousness and suggests that with the advancement of technology, machines might exhibit behaviors that closely resemble human emotions, albeit within the confines of programmed and learned parameters.
In the foreseeable future, it is conceivable that machines will surpass the human mind in terms of node count, compactness, and complexity, operating with heightened efficiency. As this technological advancement unfolds, it is plausible that profound questions may arise regarding whether the frequencies generated by the human brain are inferior to those generated by machines.
Relevant answer
Answer
Human technology in health care includes managerial knowledge required to marshal a health care workforce, operate hospitals and equipment, obtain and administer funds, and, increasingly, identify and establish markets.
Regards,
Shafagat
  • asked a question related to Computer
Question
5 answers
Dear Mathematicians, Control Engineers, and Optimization Enthusiasts, could you please compute the domain over which this inequality holds: (3/(x-2)) < -1? If yes, please prove your answer.
(Note that: the solution is not x> -1, and please don’t use any symbolic math software)
Relevant answer
Answer
Thanks a lot
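For completeness, here is a compact case-by-case derivation (multiplying both sides by x − 2, which is negative in the second case, reverses the inequality):
```latex
% Case analysis for 3/(x-2) < -1, with x != 2
\[
x > 2:\quad \frac{3}{x-2} > 0 > -1 \;\Rightarrow\; \text{no solutions.}
\]
\[
x < 2:\quad \frac{3}{x-2} < -1 \;\Longleftrightarrow\; 3 > -(x-2) = 2 - x \;\Longleftrightarrow\; x > -1 .
\]
\[
\text{Hence the solution set is } -1 < x < 2 .
\]
```
A quick check: x = 0 gives 3/(−2) = −1.5 < −1 (inside the interval), while x = −2 gives 3/(−4) = −0.75, which is not less than −1 (outside the interval).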
  • asked a question related to Computer
Question
5 answers
I have a large sparse matrix A which is column rank-defficient. Typical size of A is over 100000x100000. In my computation, I need the matrix W whose columns span the null space of A. But I do not know how to fastly compute all the columns of W.
If A is small-scale, I know there are several numerical methods based on matrix factorization, such as LU, QR, SVD. But for large-scale matrices, I can not find an efficient iterative method to do this.
Could you please give me some help?
Relevant answer
Answer
This paper: http://cedric.cnam.fr/~bentzc/INITREC/Files/DW10.pdf might be a good starting point.
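As a hedged illustration of one iterative route (not taken from the linked paper): the null space of A coincides with the eigenvectors of AᵀA belonging to (near-)zero eigenvalues, which a sparse shift-invert eigensolver can target. The sketch below uses SciPy; the test matrix, probe count k, and tolerance are placeholder choices that would need tuning for a real 100000 x 100000 problem.
```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical test problem: a sparse matrix made column-rank-deficient on purpose
rng = np.random.default_rng(0)
B = sp.random(1000, 998, density=0.01, random_state=0, format='csc')
extra = B[:, :4] @ sp.csc_matrix(rng.standard_normal((4, 2)))   # two dependent columns
A = sp.hstack([B, extra]).tocsc()                                # 1000 x 1000, rank deficient

# Null space of A = eigenvectors of A^T A with (near-)zero eigenvalues.
# Shift-invert about a small negative sigma keeps A^T A - sigma*I positive
# definite (and hence factorizable) even though A^T A itself is singular.
AtA = (A.T @ A).tocsc()
k = 6                                   # probe a few more pairs than the expected nullity
vals, vecs = spla.eigsh(AtA, k=k, sigma=-1e-6, which='LM')

tol = 1e-10                             # heuristic threshold on eigenvalues (squared singular values)
W = vecs[:, vals < tol]                 # columns approximately spanning null(A)
print("estimated nullity:", W.shape[1])
if W.size:
    print("residual ||A W||_F =", np.linalg.norm(A @ W))
```
For genuinely huge systems, a sparse rank-revealing QR (e.g., SuiteSparseQR) or block iterative solvers are alternatives worth considering.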
  • asked a question related to Computer
Question
3 answers
If you use a Likert scale where 0 is "never", 1 is "one to two times", 2 is "sometimes", 3 is "often", and 4 is "most of the time", how do you correctly compute the range?
Is the lowest value counted? Is it 4 - 1 for the lowest value, and then 3/4 = 0.75?
Will the ranges then be 0 - 0.75, 0.76 - 1.5, and so forth? Or does this need to be done differently?
Regards
Johan
Relevant answer
Answer
Determine how to categorize or compute ranges within a Likert scale that is used to assess frequency or some other ordinal variable, which is ranging from 0 ("never") to 4 ("most of the time").
In most cases, computing range or defining categories in a Likert scale is primarily dependent on your research question and how detailed you want your analysis to be.
Understanding the Likert Scale:
  1. Values: 0 to 4.
  2. Labels: 0 = Never, 1 = One to two times, 2 = Sometimes, 3 = Often, 4 = Most of the time
Basic Approach to Compute Range:
If you're considering dividing the scale into specific ranges, you would generally consider the whole numbers as your primary split points, particularly because your categories are predefined (0, 1, 2, 3, 4).
If you Want to Compute Sub-ranges:
If for some reason you want to create sub-ranges within these primary categories, you'll likely need a more detailed scale or continuous data. Creating sub-ranges within a Likert scale might be challenging and somewhat contrary to its typical use, given it's an ordinal scale where each number represents a specific category, rather than a continuous scale.
However, if this is necessary for your analysis, one approach might be:
  • 0.00 - 0.75: Never
  • 0.76 - 1.50: One to two times
  • 1.51 - 2.25: Sometimes
  • 2.26 - 3.00: Often
  • 3.01 - 4.00: Most of the time
Important Considerations:
  • Ordinal Nature: Remember Likert scales represent ordinal data, meaning the intervals between points are not necessarily equal in a mathematical or experiential sense.
  • Validity and Reliability: Altering or complicating the interpretation of a Likert scale may affect the validity and reliability of your measurement.
  • Statistical Analysis: Consider the statistical tests you plan to use and ensure they are appropriate for ordinal data or your modified scale.
  • Literature and Precedents: Always check the relevant literature and precedents in your field to ensure that your methodology is aligned with commonly accepted practices.
Your approach should be theoretically and methodologically sound. If in doubt, consult with a statistician or a research methods expert, and always pilot test your scale and approach to ensure they are valid and reliable for your specific research context. If you provide this scale to respondents, be sure to clearly communicate how they should select their response to avoid confusion.
  • asked a question related to Computer
Question
1 answer
Our CD is broken and we need to install the program on a new computer. Thank you in advance.
Relevant answer
Answer
Yes, I have.
  • asked a question related to Computer
Question
2 answers
Supplementary data currently exist only on my computer, which will stop working when I die. Why can I not store them on ResearchGate?
Relevant answer
Answer
1. Check ResearchGate's Policies
- Visit ResearchGate's official website and review their policies regarding file uploads and storage.
2. Contact ResearchGate Support
- If you're unable to upload files consider contacting ResearchGate's support team for assistance.
3. Backup Your Files
- If you plan to sell or dispose of the computer where your research files are stored, make sure to back up your important research files and data to an external storage device or a cloud-based service.
Alternatively, as Dennis Hamilton mentioned, consider the GitHub platform as a cloud storage service.
  • asked a question related to Computer
Question
3 answers
Enthalpy of mixing between two elements is a key factor in determining how easily an alloy can be formed; stronger mixing enthalpies make alloy formation simpler. Hence, is there any way to compute it? Can anyone suggest a detailed procedure for it?
Relevant answer
Answer
One possible method of determining the enthalpy of mixing between two elements theoretically is to use the regular solution model, which assumes that the atoms are randomly distributed and that the enthalpy of mixing depends on the difference in the atomic sizes and the bond energies of the elements. The regular solution model can be expressed as follows:
ΔH_mix = z N ε x(1 − x)
where ΔH_mix is the enthalpy of mixing per mole of solution, z is the coordination number, N is Avogadro's number, ε is the interaction energy between unlike atoms, and x is the mole fraction of one of the elements.
The interaction energy ε can be estimated from the bond energies of the pure elements using the following formula:
ε = (E_A + E_B)/2 − E_AB
where E_A and E_B are the bond energies of the pure elements A and B, and E_AB is the bond energy of the compound AB.
The bond energies can be obtained from experimental data or calculated from theoretical methods, such as density functional theory or molecular dynamics.
For more details and examples of this method, you can refer to these sources:
Enthalpy of Mixing in Binary Alloys: A Simple Model
Thermodynamic Modeling of Alloys
there are other possible methods to determine the enthalpy of mixing between two elements theoretically. Some of them are:
  • The subregular solution model, which is a modification of the regular solution model that allows for different interaction energies between unlike atoms depending on their relative positions in the solution. This model can account for the deviation from ideality and asymmetry in the enthalpy of mixing1.
  • The quasi-chemical model, which is based on the statistical mechanics of lattice gases and considers the short-range order in the solution. This model can describe the enthalpy of mixing as a function of the coordination number, the nearest-neighbor interactions, and the degree of order in the solution2.
  • The cluster variation method, which is an extension of the quasi-chemical model that incorporates higher-order clusters of atoms and their interactions. This method can capture the long-range order and phase transitions in the solution3.
These are some examples of alternative methods to calculate the enthalpy of mixing theoretically. However, each method has its own assumptions, limitations, and parameters that need to be determined or fitted from experimental data or first-principles calculations. Therefore, no single method can be universally applicable or accurate for all kinds of solutions.
Here is a summary of the methods:
  • The regular solution model assumes that the atoms are randomly distributed and that the enthalpy of mixing depends on the difference in the atomic sizes and the bond energies of the elements. The enthalpy of mixing is given by:
ΔH_mix = z N ε x(1 − x)
where ΔH_mix is the enthalpy of mixing per mole of solution, z is the coordination number, N is Avogadro's number, ε is the interaction energy between unlike atoms, and x is the mole fraction of one of the elements.
The source for this method is Lecture 3: Models of Solutions - University of Cambridge.
  • The subregular solution model is a modification of the regular solution model that allows for different interaction energies between unlike atoms depending on their relative positions in the solution. This model can account for the deviation from ideality and asymmetry in the enthalpy of mixing. The enthalpy of mixing is given by:
ΔH_mix = z N ε x(1 − x) + z N β x(1 − x)(1 − 2x)
where ΔH_mix is the enthalpy of mixing per mole of solution, z is the coordination number, N is Avogadro's number, ε is the average interaction energy between unlike atoms, β is a parameter that measures the deviation from regularity, and x is the mole fraction of one of the elements.
The source for this method is Lecture 7: Quasichemical Solution Models - University of Cambridge.
  • The quasi-chemical model is based on the statistical mechanics of lattice gases and considers the short-range order in the solution. This model can describe the enthalpy of mixing as a function of the coordination number, the nearest-neighbor interactions, and the degree of order in the solution. The enthalpy of mixing is given by:
ΔH_mix = −z N (N_A ε_AA + N_B ε_BB − N_AB ω)
where ΔH_mix is the enthalpy of mixing per mole of solution, z is the coordination number, N is Avogadro's number, N_A, N_B, and N_AB are the numbers of AA, BB, and AB bonds per atom, respectively, and ω = ε_AA + ε_BB − 2ε_AB is a parameter that measures the bond energy difference between like and unlike atoms.
The source for this method is Cluster Variation Method Analysis of Correlations and … - Springer.
  • The cluster variation method is an extension of the quasi-chemical model that incorporates higher-order clusters of atoms and their interactions. This method can capture the long-range order and phase transitions in the solution. The enthalpy of mixing is given by:
ΔH_mix = −z N (N_A ε_AA + N_B ε_BB − N_AB ω) − z N (N_ABC ω_ABC + N_ABD ω_ABD + …)
where ΔH_mix is the enthalpy of mixing per mole of solution, z is the coordination number, N is Avogadro's number, N_A, N_B, N_AB, N_ABC, N_ABD, etc. are the numbers of AA, BB, AB, ABC, ABD, etc. clusters per atom, respectively, ω = ε_AA + ε_BB − 2ε_AB is a parameter that measures the bond energy difference between like and unlike atoms, and ω_ABC, ω_ABD, etc. are parameters that measure the cluster interaction energies.
The source for this method is Cluster Variation Method | SpringerLink.
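As a quick numerical illustration of the regular solution expression quoted above (a sketch only; the coordination number, interaction energy, and mole fraction below are placeholder values, not data for a real alloy system):
```python
from scipy.constants import Avogadro

def regular_solution_dh_mix(x, epsilon, z=12):
    """Enthalpy of mixing per mole of solution from the regular solution model.

    x       : mole fraction of component A (0 <= x <= 1)
    epsilon : interaction energy between unlike atoms, in J per bond,
              with the sign convention epsilon = (E_A + E_B)/2 - E_AB used above
    z       : coordination number (12 assumed here, e.g. an fcc-like lattice)
    """
    return z * Avogadro * epsilon * x * (1 - x)

# Illustrative numbers only:
print(regular_solution_dh_mix(x=0.5, epsilon=-2.0e-21), "J/mol")
```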
good luck
  • asked a question related to Computer
Question
3 answers
Want to know formula for males
Relevant answer
Answer
Then use this formula: VO2 max = 132.853 - (0.0769 x your weight in pounds) - (0.3877 x your age) + (6.315 if you are male or 0 if you are female) - (3.2649 x your walking time) - (0.1565 x your heart rate at the end of the test). Losing weight won't necessarily increase your VO2 max.
Regards,
Shafagat
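For convenience, the quoted Rockport walking-test formula can be wrapped in a small function; the example inputs below are purely illustrative.
```python
def vo2max_rockport(weight_lb, age_years, is_male, walk_time_min, end_heart_rate_bpm):
    """Rockport one-mile walk test estimate of VO2 max, as quoted above.

    weight_lb          : body weight in pounds
    age_years          : age in years
    is_male            : True for male, False for female
    walk_time_min      : time to walk one mile, in minutes
    end_heart_rate_bpm : heart rate at the end of the walk, beats per minute
    """
    return (132.853
            - 0.0769 * weight_lb
            - 0.3877 * age_years
            + (6.315 if is_male else 0.0)
            - 3.2649 * walk_time_min
            - 0.1565 * end_heart_rate_bpm)

# Example: 170 lb, 30-year-old male, 15-minute mile, finishing heart rate 120 bpm
print(round(vo2max_rockport(170, 30, True, 15.0, 120), 1), "ml/kg/min")
```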
  • asked a question related to Computer
Question
3 answers
Time constraints; lack of a computer, laptop, desktop, etc.
Relevant answer
Engaging in research can be a rewarding and valuable pursuit, but it also comes with several challenges that researchers commonly encounter. Here are some challenges that can affect the research process:
Limited resources: Research often requires financial, human, and material resources. Securing funding and obtaining necessary equipment, access to data, or research participants can be challenging, particularly for independent researchers or those in resource-constrained environments.
Time constraints: Research projects often have strict timelines, whether self-imposed or imposed by funding agencies, academic institutions, or external factors. Balancing research activities with other responsibilities such as teaching, administrative duties, or personal commitments can be demanding and may lead to time constraints.
Access to data or information: Acquiring relevant and reliable data or accessing specialized information sources can be a significant challenge. Some data may be proprietary, confidential, or difficult to obtain, while certain research topics may require access to restricted or sensitive information.
Ethical considerations: Research involving human subjects or sensitive topics requires adherence to ethical guidelines and obtaining appropriate approvals from research ethics committees. Navigating ethical considerations, ensuring informed consent, and maintaining participant confidentiality can pose challenges, particularly in studies involving vulnerable populations or controversial subjects.
Methodological complexities: Choosing and implementing appropriate research methodologies, experimental designs, or data collection techniques can be complex. Researchers must consider the validity, reliability, and generalizability of their methods and ensure they are suitable for addressing their research questions.
Analyzing and interpreting data: Data analysis and interpretation can be intricate, especially with large or complex datasets. Researchers may encounter challenges in selecting appropriate statistical methods, handling missing data, dealing with outliers, or ensuring the accuracy of their analyses. Interpreting the results and drawing meaningful conclusions require careful consideration and expertise.
Publication and dissemination: Getting research findings published in reputable journals or disseminating them effectively to the intended audience can be challenging. High competition, publication bias, or difficulties in communicating complex research concepts to a broader audience may impede the visibility and impact of the research.
Collaboration and interdisciplinary work: Collaborative research, particularly across disciplines or with international partners, can be challenging due to differences in research cultures, communication barriers, or conflicting schedules. Building effective collaborations and managing diverse research teams require strong interpersonal and communication skills.
Peer review and criticism: The peer review process is an integral part of research, but it can be challenging to receive and incorporate feedback from reviewers. Handling criticism, addressing reviewer comments, and revising research work can be time-consuming and emotionally demanding.
Researcher well-being and self-care: Research can be intellectually and emotionally demanding, leading to stress, burnout, or mental health challenges. Balancing work-life commitments, maintaining self-care practices, and seeking support are essential for researchers' well-being and longevity in the field.
While these challenges may seem daunting, researchers can overcome them through careful planning, seeking mentorship and collaboration, building resilience, and adopting effective time and project management strategies. It is important to acknowledge and address these challenges to ensure the integrity, rigor, and impact of research endeavors.
  • asked a question related to Computer
Question
1 answer
Dear all, I am trying to compute the Modified Weak Galerkin method for the Poisson Problem mentioned in the paper:
[1] A modified weak Galerkin finite element method. X. Wang, N.S. Malluwawadu, F. Gao, T.C. McMillan.
I am using FreeFEM++, but I am having difficulties implementing algorithm (3) from the paper above, because the jump function, the average function, and the weak gradient are not built into the program and have not been defined by anyone before in it.
My question is, which software program should I use to compute this problem?
Relevant answer
Answer
The modified weak Galerkin method does not require a penalty parameter, in contrast with traditional DG methods.
Regards,
Shafagat
  • asked a question related to Computer
Question
4 answers
Hi, I was going through this article by Tarsicio Beléndez, Cristian Neipp, and Augusto Beléndez.
I am wondering about the case where the load is at any intermediate point along the beam and not at the endpoint. That would be a universal case. How to proceed with that?
Expression (4) changes to:
M = P(a - ux - x), a is the point of vertical load.
But thereafter it has no effect, because a is a constant and is eliminated in the differentiation process, and all other equations remain as they are.
If we calculate the end-slope angle Phi0 for a load at 'a' instead of L, can we use equation (9) to compute Phi0?
Of course, from small-angle theory, Phi0 should remain the same for a < x <= L (my assumption).
Reference:
  • DOI: 10.1088/0143-0807/23/3/317
Additional reference: (Equation 11 instead of 9 in this article)
  • DOI: 10.1007/s40030-018-0342-3
Relevant answer
Answer
Mohammad Imam it's good that you elaborated the process.
In step 3, I see that the curvature is assumed to be d²y/dx² = M/EI. This itself is a small-angle (small-deflection) assumption. Is it valid for large-angle deflection?
Thanks.
  • asked a question related to Computer
Question
9 answers
What are the three layers of embedded system architecture and what are the key differences between IoT devices and computers?
Relevant answer
Answer
Dr Mohammad Imam thank you for your contribution to the discussion
  • asked a question related to Computer
Question
3 answers
Greetings,
For one of my cognitive science lab sessions, I have to install JupyterLab on my computer (a MacBook Air). However, I am really lost as to how to do it, especially since I am not tech savvy and know next to nothing about Python. I tried checking the JupyterLab website but could not figure out how to install it: https://jupyter.org.
Can someone help me with this? Thank you very much in advance!
Relevant answer
Answer
Greetings Melody,
I would use an environment manager, such as Anaconda (which runs the open-source Conda engine under the hood). Unlike closed-source software computer languages (e.g., MATLAB), Python is maintained primarily by the community. That means that all of the language's packages/libraries/dependencies are constantly changing and being updated and, therefore, not always compatible with each other. Python environment managers make sure that all packages can operate with each other.
So, I suggest downloading Anaconda, installing it, and opening the Anaconda Navigator application. It will open in the "base" environment. Press the "Environment" tab on the left and then press the "Create" button from the bottom (it is recommended to create a new environment and not use the "base" environment for coding). You will be prompted to choose the version of Python that the environment will use and to give the environment a name. I suggest choosing the latest version (3.11). Anaconda will now create a new environment.
Go back to the "Home" tab. Make sure that the new environment's name is selected on the dropdown menu from the top next to "on". You will see several icons of popular Python data analysis packages. Among them, you will see "Jupyter Lab". Press "Install". Anaconda will install all the required dependencies. After that, you can press "Launch" next to Jupyter Lab's icon and it will launch on you browser.
Enjoy,
Eli
  • asked a question related to Computer
Question
2 answers
I am trying to run a protein-ligand molecular dynamic simulation for 100 nanoseconds. I would like to know if it could be run on an 8 GB RAM laptop. Do we need superior computers?
Relevant answer
Answer
Hello,
Running a molecular dynamics simulation, especially when it involves protein-ligand interactions, is a task that demands substantial computational resources. Your query revolves around the feasibility of executing such a simulation for a duration of 100 nanoseconds on a laptop equipped with 8 GB of RAM. The straightforward answer is that it might be challenging, if not outright unfeasible, depending on the specifics of your system and the intricacy of the simulation.
The size of your system, meaning the number of atoms in your protein-ligand complex, will have a significant bearing on the resources required. The more atoms you have, the more memory and computational power you'll need. Furthermore, simulations can be conducted at varying levels of detail. Those that account for every single atom, known as all-atom simulations, are naturally the most resource-intensive.
The choice of software and algorithms for your simulation can also have distinct computational demands. Some software packages are optimized for specific hardware configurations or can harness GPU acceleration, which can considerably expedite the simulation process.
Lastly, the rate at which you intend to save your data will influence the disk space you'll need and could potentially slow down the simulation.
Given all these factors, an 8 GB RAM laptop might fall short for a detailed 100-nanosecond simulation of a protein-ligand system. If you're aiming to run such a simulation, you might find yourself in need of a more robust computer or even a high-performance computing cluster. If you don't have access to such resources, exploring cloud computing options or seeking time on a supercomputing facility might be worthwhile avenues.
Regards,
  • asked a question related to Computer
Question
2 answers
Is there any list or work or survey on current difficult or hard Computer Vison problems which are yet to solve efficiently?
Relevant answer
Answer
Scene Understanding and Object Detection & Recognition are at least, two of them.
  • asked a question related to Computer
Question
1 answer
Dear researchers, I intend to generate a single composite index variable using the entropy weight method. How can the single index be computed in Stata from the different component variables?
Relevant answer
Answer
All of you, I've seen the entropy weight method used in several papers to create composite indices out of various variables. Knowing the proper Stata package to use to accomplish this would be wonderful. A straightforward illustration might also be extremely helpful.
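While the question asks about Stata, the computation itself is short and may be easier to see in a script first. Below is a hedged Python sketch of the usual entropy-weight steps (min-max normalisation, entropy per indicator, weights, weighted sum) with made-up data; the same steps can then be reproduced in Stata/Mata.
```python
import numpy as np

def entropy_weight_index(X):
    """Composite index via the entropy weight method (one common formulation).

    X : (n_obs, n_vars) array of indicator values, larger = better for every
        column (reverse any 'cost' indicators before calling).
    Returns (composite index per observation, weights per indicator).
    """
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    # 1. Min-max normalise each indicator to [0, 1]
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # 2. Proportions p_ij (small constant avoids log(0))
    P = (Xn + 1e-12) / (Xn + 1e-12).sum(axis=0)
    # 3. Entropy of each indicator
    e = -np.sum(P * np.log(P), axis=0) / np.log(n)
    # 4. Weights from the degree of diversification (1 - e)
    w = (1 - e) / (1 - e).sum()
    # 5. Weighted sum gives the composite index per observation
    return Xn @ w, w

# Hypothetical data: 5 firms x 3 reporting indicators
X = np.array([[0.2, 30, 1],
              [0.5, 45, 3],
              [0.9, 60, 2],
              [0.4, 20, 4],
              [0.7, 55, 5]])
index, weights = entropy_weight_index(X)
print("weights:", weights.round(3))
print("composite index:", index.round(3))
```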
  • asked a question related to Computer
Question
2 answers
I have tried to find the discriminant for complex-valued harmonic polynomial by using the following options:
1. Real part of the equation=Imaginary part of the equation=Jacobian with respect to equation=0
2. Using singular method.
The problem is these two methods are too slow to compute the discriminant.
My question is can I have another option to compute the discriminant?
Relevant answer
Answer
Dear doctor
Go To
On mixed polynomials of bidegree (n, 1)
Mohamed Elkadi and André Galligo
December 17, 2014
"Abstract Specifying the bidegrees (n, m) of mixed polynomials P(z, z¯) of the single complex variable z, with complex coefficients, allows to investigate interesting roots structures and counting; intermediate between complex and real algebra. Multivariate mixed polynomials appeared in recent papers dealing with Milnor fibrations, but in this paper we focus on the univariate case and m = 1, which is closely related to the important subject of harmonic maps. Here we adapt, to this setting, two algorithms of computer algebra: Vandermonde interpolation and a bissection-exclusion method for root isolation. Implemented in Maple, they are used to explore some interesting classes of examples."
  • asked a question related to Computer
Question
1 answer
The work contains a computed torque diagram and its characteristic equation.
Relevant answer
Answer
The characteristic equation is a polynomial equation derived from the transfer function of the system. It helps determine the stability and behavior of the system.
The term "computed torque diagram" does not typically have a direct connection to a characteristic equation. The computed torque method is a control strategy used in robotic systems to achieve desired trajectories and improve tracking performance.
Here is characteristic equation, let's consider a simple second-order control system with a transfer function:
G(s) = K / (s^2 + 2ζωn s + ωn^2)
Where:
- K is the system gain
- ζ is the damping ratio
- ωn is the natural frequency
The characteristic equation is obtained by setting the denominator of the transfer function equal to zero:
s^2 + 2ζωn s + ωn^2 = 0
This is a second-order polynomial equation in the Laplace domain, and its roots (solutions) define the system's poles. The characteristic equation allows us to analyze the stability and transient response of the system.
For example, if we have ζ > 1 (overdamped system), the characteristic equation will have two distinct real roots. If ζ = 1 (critically damped system), the characteristic equation will have two equal real roots. And if ζ < 1 (underdamped system), the characteristic equation will have a pair of complex conjugate roots.
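As a small worked illustration of this classification, the poles can be obtained numerically as the roots of the characteristic polynomial; the natural frequency and ζ values below are arbitrary example numbers.
```python
import numpy as np

def second_order_poles(zeta, wn):
    """Poles of s^2 + 2*zeta*wn*s + wn^2 = 0 and the damping regime."""
    poles = np.roots([1.0, 2.0 * zeta * wn, wn ** 2])
    if zeta > 1:
        regime = "overdamped (two distinct real poles)"
    elif np.isclose(zeta, 1.0):
        regime = "critically damped (repeated real pole)"
    else:
        regime = "underdamped (complex-conjugate poles)"
    return poles, regime

for zeta in (0.5, 1.0, 2.0):
    poles, regime = second_order_poles(zeta, wn=4.0)
    print(f"zeta = {zeta}: poles = {poles}, {regime}")
```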
Good luck
credit AI tool
  • asked a question related to Computer
Question
1 answer
General
Relevant answer
Answer
It combines principles from computer science, electrical engineering, and control systems theory. It focuses on the design, analysis, and implementation of computer-based control systems for various applications. Here are some key aspects of computer and control engineering:
1. Control Systems: Control systems are at the core of computer and control engineering. These systems aim to regulate and manipulate the behavior of physical processes or devices. Control engineering involves designing and implementing algorithms and hardware that measure system variables, make decisions based on feedback, and generate control signals to achieve desired system behavior.
2. Feedback Control: Feedback control is a fundamental concept in control engineering. It involves continuously measuring the output of a system, comparing it to a desired reference signal, and adjusting the system inputs accordingly. Feedback control enables systems to adapt and maintain stability, accuracy, and performance in the presence of disturbances or uncertainties.
3. Digital Control: Computer and control engineering heavily rely on digital control techniques. Digital control systems use digital computers or microcontrollers to process measurements, execute control algorithms, and generate control signals. Digital control offers benefits such as flexibility, precision, ease of implementation, and the ability to handle complex algorithms.
4. System Modeling and Analysis: Computer and control engineers use mathematical models to describe and analyze the behavior of physical systems. These models help in understanding system dynamics, stability, and performance. Techniques such as transfer functions, state-space models, and frequency-domain analysis are employed to analyze the behavior of control systems.
5. Embedded Systems: Embedded systems play a prominent role in computer and control engineering. These are specialized computing systems integrated into larger systems or devices to perform specific control functions. Examples include microcontrollers, programmable logic controllers (PLCs), and real-time operating systems. Embedded systems require considerations such as real-time constraints, reliability, and power efficiency.
6. Robotics and Automation: Computer and control engineering intersect with robotics and automation. Engineers design control systems for robots to enable autonomous or semi-autonomous operation. This involves perception, decision-making, motion planning, and control algorithms. Applications range from industrial automation to autonomous vehicles and robotic surgery.
7. Human-Machine Interaction: Computer and control engineering also involves designing user interfaces and human-machine interaction systems. This includes developing intuitive interfaces, control interfaces, and integrating control algorithms with human input for improved system operation and user experience.
8. System Identification and Adaptive Control: System identification techniques are used to estimate system models from measured input-output data. Adaptive control methodologies adapt control algorithms and parameters in real-time based on system variations or changes. These techniques are valuable for systems with time-varying dynamics or uncertain parameters.
It has applications in various domains, including manufacturing, robotics, aerospace, energy systems, automotive systems, biomedical engineering, and more. It is a rapidly evolving field that continually embraces advancements in computer technology, control theory, and automation to improve system performance, efficiency, and reliability.
Good luck
credit AI tools
  • asked a question related to Computer
Question
2 answers
Hi, I am doing a thesis on the above topic. Can you please recommend information and materials I can use to conduct the investigation? Thanks in advance.
Relevant answer
Answer
Thank you@ Samsul Islam
  • asked a question related to Computer
Question
2 answers
I would like to acquire EMG, torque/angle (from isokinetic dynamometer), and stimulation data via Matlab. I have a chassis system that is connected to my computer with Matlab. When the stimulation is given, I would like it to trigger the system such that Matlab records and shows me the next 0.3s of data (i.e., to visualize M-wave).
Does anyone have a Matlab script they could share with me?
Relevant answer
Answer
Matlab script to acquire EMG, torque/angle data from an isokinetic dynamometer, and stimulation data, and visualize the next 0.3s of data after a stimulation event:
```
%% Set up data acquisition
s = daq.createSession('ni');
s.Rate = 1000; % Sampling rate in Hz

% Add analog input channels (EMG and torque/angle)
addAnalogInputChannel(s, 'dev1', 0:1, 'Voltage');

% Add digital input channel (stimulation trigger)
addDigitalChannel(s, 'dev1', 'Port0/Line0', 'InputOnly');

%% Start data acquisition
s.DurationInSeconds = 10; % Acquisition duration in seconds
[data, timestamps] = s.startForeground();

%% Process data
emg_data = data(:, 1);
torque_data = data(:, 2);
stim_trigger = data(:, 3);

% Find stimulation events (rising edges of the trigger signal)
stim_events = find(diff(stim_trigger) > 0.5);

% Visualize the 0.3 s of data following each stimulation event
for i = 1:length(stim_events)
    event_time = timestamps(stim_events(i));
    window_idx = find(timestamps > event_time & timestamps <= event_time + 0.3);

    figure;
    plot(timestamps(window_idx) - event_time, emg_data(window_idx));
    hold on;
    plot(timestamps(window_idx) - event_time, torque_data(window_idx));
    xlabel('Time (s)');
    ylabel('Signal');
    legend('EMG', 'Torque/Angle');
end
```
This script sets up a data acquisition session using the NI-DAQmx driver. It adds analog input channels for EMG and torque/angle data, as well as a digital input channel for the stimulation trigger. The script then starts data acquisition for a fixed duration and processes the acquired data. It identifies stimulation events by detecting rising edges in the stimulation trigger signal. For each stimulation event, the script visualizes the next 0.3s of EMG and torque/angle data using a separate figure.
Credit: example usage of AI
  • asked a question related to Computer
Question
14 answers
Hello everyone!
I want to draw a huge flowchart for my computer code. Inside the flowchart, I want to write equations and notes.
Is there any specific software for that?
Relevant answer
Answer
Yes, there are several software options to draw flowcharts for computer codes. Some of these software options include Microsoft Visio, Lucidchart, and Dia.
  • asked a question related to Computer
Question
5 answers
Hi all,
I want to conduct a correlation analysis, to check whether changes in learning outcomes correlate with the number of lessons students attend. Learner outcomes are measured at baseline and endline, and the measurement is ordinal. To be specific, students get assessed and assigned to one of 5 learning categories (for example for literacy, this would be Beginner, Letter, Word, Sentence, or Paragraph). The number of lessons students attended is measured quantitatively.
Since we are interested in the changes in learning outcomes, I was initially planning to calculate a difference score between endline and baseline, and correlate that with the number of lessons attended. However, having discovered that learner outcomes are measured ordinally, this does not make any sense. What would be the best way to compute a correlation between changes in learner outcomes (between baseline and endline) and the number of lessons students attend?
Thank you in advance for your responses!
Best,
Sharon
Relevant answer
Answer
Administer the same test at the beginning (pre-test) and again at the end (post-test), and find the difference between the post-test and pre-test scores.
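A hedged illustration of one simple option, given the ordinal outcome: Spearman's rank correlation between the coded learning level (0-4) and the number of lessons attended. The data below are entirely hypothetical, and any difference score on ordinal categories should be interpreted with caution.
```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: ordinal learning level coded 0-4 at baseline/endline,
# and number of lessons attended, for 10 students.
baseline = np.array([0, 1, 1, 2, 0, 2, 1, 0, 3, 1])
endline  = np.array([1, 2, 2, 3, 1, 2, 3, 1, 4, 2])
lessons  = np.array([12, 20, 18, 25, 10, 15, 30, 8, 28, 22])

# Spearman rank correlation between endline level and lessons attended
rho, p = spearmanr(endline, lessons)
print(f"endline vs lessons: rho = {rho:.2f}, p = {p:.3f}")

# A crude 'change' score (categories gained) can also be ranked, but remember
# the categories are ordinal, not interval, so treat this with caution.
change = endline - baseline
rho_c, p_c = spearmanr(change, lessons)
print(f"change vs lessons:  rho = {rho_c:.2f}, p = {p_c:.3f}")
```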
  • asked a question related to Computer
Question
1 answer
Hi, I am trying to perform a radiometric calibration of an ASTER image in ENVI. I watched a YouTube tutorial where they use the Radiometric Calibration tool, but when I try to do it on my computer the Radiometric Calibration tool is not displayed. Does anyone have an idea why? I have already restarted both the program and the computer, but it did not help.
Relevant answer
Answer
Hi ,
There are a few reasons why the Radiometric Calibration tool might not be displaying in ENVI. Here are a few things to check:
  • Make sure that you have the latest version of ENVI installed. The Radiometric Calibration tool was added in ENVI 5.3, so if you are running an older version, the tool will not be available.
  • Make sure that the image you are trying to calibrate is a supported format. The Radiometric Calibration tool only works with certain types of images, such as Landsat, Sentinel-2, and Aster.
  • Make sure that the image you are trying to calibrate has the correct metadata. The Radiometric Calibration tool needs to know the gain and offset values for each band in the image in order to perform the calibration.
  • If you have checked all of these things and the Radiometric Calibration tool is still not displaying, you can try contacting ENVI support for help.
Here are some additional troubleshooting tips:
  • Try opening the image in a different image viewer to see if the metadata is correct.
  • Try exporting the image to a different format and then opening it in ENVI.
  • Try reinstalling ENVI.
I hope this helps !
Please recommend my reply if you find it useful. Thanks.
  • asked a question related to Computer
Question
1 answer
How to I transfer an on-going discussion about an issue from my iPhone to my Windows desktop?
Relevant answer
Answer
To transfer an ongoing discussion on ResearchGate from your iPhone to your Windows desktop computer, you can follow these steps:
1. Open your web browser on your desktop computer and go to the ResearchGate website (www.researchgate.net).
2. Log in to your ResearchGate account using your username and password.
3. Navigate to the discussion that you want to transfer from your iPhone. You can use the search bar or the navigation menu to find the discussion.
4. Once you have found the discussion, click on the "Reply" button to open the reply box.
5. On your iPhone, select and copy the text of your ongoing discussion.
6. Go back to your web browser on your desktop computer and paste the text into the reply box.
7. Review the text and make any necessary edits or formatting changes.
8. Click on the "Post Reply" button to post your response to the discussion.
Your on-going discussion from your iPhone should now be transferred to your Windows desktop computer and posted to the ResearchGate discussion. You can continue the discussion on your desktop computer or on your iPhone, and the updates will be synced across all your devices.
Credit: Mainly AI tools
  • asked a question related to Computer
Question
5 answers
I am trying to calculate the spontaneous polarization of a semiconducting magnetic 2d material with VASP. The Material has a band gap of 0.28 eV (indirect).
VASP warns that
"The calculation of the macroscopic polarization by means of the Berry-phase expressions (LCALCPOL=.TRUE.) requires your system to be insulating. This does not seem to be the case."
Even though there is a finite band gap (with PBE). Adding to this, since all the calculations are 0K calculations, this semiconductor is effectively insulating for the purpose of this calculation. What is going wrong here?
Relevant answer
Answer
0.28 eV is quite a small band-gap. Are you using some Fermi-level smearing, which is causing thermal excitation across the band-gap? It's quite common to use smearing of 0.1-0.2 eV, which could easily cause this problem.
  • asked a question related to Computer
Question
4 answers
I want to compute the critical load of an Euler-Bernoulli beam under axial load. I am using the finite element method for discretization and an eigenvalue analysis to compute the critical load. You can see more detail in the attachment. But I do not get an accurate value compared to the analytical one. If anybody has an idea about that, please tell me. I will be very thankful.
Best,
Rauf.
Relevant answer
Answer
Please also make sure that the beam behaves linearly before buckling. Otherwise, I suggest applying preload that is close to the critical load and then performing the BUCKLE analysis.
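If it helps to have a reference to compare against, here is a minimal, self-contained sketch in Python (not the attached code; E, I and L are assumed values) of the standard FEM eigenvalue buckling calculation for a pinned-pinned Euler-Bernoulli beam. With 20 elements it reproduces the analytical Euler load pi^2*EI/L^2 to well under 1%:

import numpy as np
from scipy.linalg import eigh

# Assumed properties (not from the original question)
E, I, L = 210e9, 1.0e-6, 2.0       # Pa, m^4, m
n_el = 20                          # number of 2-node beam elements
le = L / n_el
ndof = 2 * (n_el + 1)              # (deflection, rotation) per node

# Element elastic stiffness and geometric stiffness (for a unit axial load)
ke = (E * I / le**3) * np.array([
    [ 12,      6*le,  -12,      6*le   ],
    [ 6*le,  4*le**2,  -6*le,  2*le**2 ],
    [-12,     -6*le,   12,     -6*le   ],
    [ 6*le,  2*le**2,  -6*le,  4*le**2 ]])
kg = (1.0 / (30.0*le)) * np.array([
    [ 36,      3*le,  -36,      3*le   ],
    [ 3*le,  4*le**2,  -3*le,   -le**2 ],
    [-36,     -3*le,   36,     -3*le   ],
    [ 3*le,   -le**2,  -3*le,  4*le**2 ]])

K  = np.zeros((ndof, ndof))
KG = np.zeros((ndof, ndof))
for e in range(n_el):
    d = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
    K[np.ix_(d, d)]  += ke
    KG[np.ix_(d, d)] += kg

# Pinned-pinned: deflection fixed at both ends, rotations free
free = [i for i in range(ndof) if i not in (0, 2*n_el)]
Kf, KGf = K[np.ix_(free, free)], KG[np.ix_(free, free)]

# K v = P * KG v  <=>  KG v = (1/P) K v ; the largest 1/P gives the critical load
mu = eigh(KGf, Kf, eigvals_only=True)
P_cr = 1.0 / mu.max()
print("FEM   P_cr =", P_cr)
print("Euler P_cr =", np.pi**2 * E * I / L**2)

If your own discretization gives a very different value, the usual suspects are the geometric stiffness matrix, the boundary conditions, or the sign convention of the axial load.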
  • asked a question related to Computer
Question
3 answers
I have looked at database management and applications, data-sets and their use in different contexts. I have looked at digital in general, and I have noticed that there seems to be a single split:
-binary computers, performing number crunching (basically), and behind this you find Machine Learning (ML), DL, RL, etc., at the root of current AI
-quantum computing, still with numbers as key objects, with added probability distributions, randomisation, etc. This deviates from deterministic binary computing but only to a certain extent.
Then, WHAT ABOUT computing "DIRECTLY ON SETS", instead of "speaking of sets" and actually only "extracting vectors of numbers from them"? We can program and operate on non-numerical objects: older languages like LISP and LELISP, whose basic objects are lists of characters of any length and shape, did just that decades ago.
So, to every desktop user of spreadsheets (the degree-zero of data-set analytics) I am saying: you work with matrices, the mathematical name for tables of numbers, so you know about data-sets and about analytics. Why wouldn't YOU put the two together? Sets are flexible. Sets are sometimes incorrectly named "bags" because it sounds fashionable (but bags have holes, they may be of plastic, not reusable; sets are more sustainable, math is clean - joking). It's cool to speak of "bags of words"; I don't do that. Sets, why? Sets handle heterogeneity, and they can be formed with anything you need them to contain, in the same way a vehicle can carry people, dogs, potatoes, water, diamonds, paper, sand, computers. Matrices? Matrices nicely "vector-multiply", and are efficient in any area of work, from engineering to accounting to any science or humanities domain. They can be simplified in many cases (eigenvectors, eigenvalues; along some geometric directions the operations get simple, and sometimes a change of reference vectors gives a diagonal matrix with zeros everywhere except on the diagonal, by a simple change of coordinates, i.e. a geometric transformation).
HOW DO WE DO THAT IN PRACTICE? Compute on SETS, NOT ON NUMBERS? One can imagine the huge efficiencies gained in some domains, potentially (new: yet to be explored, maybe BY YOU? IN YOUR AREA). Here is the math, simple: it combines the knowledge of an 11-year-old (basic set theory) and of a 15-year-old (basic matrix theory). SEE FOR YOURSELF, and please POST YOUR VIEW on where and how to apply...
Relevant answer
Answer
I am in line with Aparna Sathya Murthy: there are different levels of computing or computational methods. Number crunching is helpful for, and used in, any industry. Data crunching commonly involves stripping out unwanted information and formatting, as well as cleaning and restructuring the data. Analyzing large amounts of information can be invaluable for decision-making, but companies often underestimate the amount of effort required to transform data into a form that can be analyzed. Even accounting is much more than number crunching.
Computers are like humans - they do everything except think.
John von Neumann
  • asked a question related to Computer
Question
3 answers
Dear Wojcieh Salabun and coauthors
I read your paper
How Do the Criteria Affect Sustainable Supplier Evaluation? - A Case Study Using Multi-Criteria Decision Analysis Methods in a Fuzzy Environment
And these are my comments:
1- Unfortunately, very scant information on the scenario is given. The paper only informs us that it is related to batteries, gives the number of suppliers, and says that there are 15 criteria, but does not say which they are; it only gives the labels of five sectors or clusters. It only states that it refers to battery suppliers for swapping stations. And what are swapping stations? As you understand, the reader is not obliged to know about them.
2- On page 10 you say: “The resulting ranking was considered as a reference point in the study, with the ranking order of the options presented in Table 1”. I understand that there is a mistake here, since it should be `Table 3’.
3. In section 4.1 you say: "A sensitivity analysis also provides more comprehensive knowledge and a greater view of the overall problem, showing the decision-makers what might change in the results under changing external conditions".
This is, in my opinion, an excellent contribution to the definition and properties of sensitivity analysis (SA). Its merit is that in most cases it is the reason why a DM can reverse the best alternative and select the second one. Unfortunately, this is very rarely mentioned in SA, which relies only on the existing criteria.
However, I disagree with your sentence "Therefore, the researchers adopted an approach in which the weight of each criterion was modified at +/- 20% from the initial value".
I disagree because you cannot assume that all intervening criteria may change +/- 20%. This is not realistic, since each intervening or binding criterion, that is, each criterion that shapes a certain alternative, has its own allowed range of variation and may even have none. In addition, it appears that all criteria are binding, which is very seldom the case, and thus it could be that only six or seven criteria, out of the 15, are relevant.
4- The paper considers excluding criteria to compute the ranking. I think that this procedure may be incorrect. One thing is to consider criteria irrelevant once a solution is reached, and another is not to consider them from the beginning, solving a problem that ignores criteria which, although irrelevant according to entropy, are needed. Entropy, or better, its complement, the information from entropy, tells us which are the most important criteria to evaluate alternatives, but it does not indicate that some of the others can be eliminated because they have a high entropy. They contain information, perhaps very little, but information that it is not advisable to ignore.
Strangely enough, you determine that the most important criterion is C2 or Transportation cost, and you decide to eliminate it?
5- I am curious about something. I guess that the batteries are for car factories or for retail. You consider transportation distances, which range from 1055 km between Beijing and Wuhan, to 167 km between Beijing and Baoding, and 926 km between Wuhan and Baoding, but in my opinion this means nothing if you don't specify the destination of the batteries manufactured in each of the three places. Obviously, if they are used in the same city where they are manufactured, the transportation cost is not very important, but it could be extremely important if the distance between origin and destination is large.
You say: “𝐶2 (0.750, 0.857, 1.000) (0.167, 0.200, 0.250) (0.500, 0.600, 0.750) (0.750, 0.857, 1.000)”
This illustrates that from transportation you have three values for each option. But what do they mean?
Of course, it is understood that, as TFNs, they indicate the smallest likely value, the largest possible value and the most probable value. But of what? In the case of transportation cost, what do they refer to? Cost per unit per km? And in the case of distance? As far as I know, distances are fixed, unless there are several routes between two places. If there are, why do you need to use TFNs?
If there is a manufacturing site, it can be understood in the sense that there is a minimum cost of transportation for, say, train, an upper one for, say, truck, and a middle one considering another route; but 0.075 may be a percentage of what? The values across options are quite similar, suggesting that distance is not an issue; what is it then?
6- You assume that only 8 criteria, out of 15, are to be considered, i.e., those are the criteria that participate in the selection of the ranking. For me this is very correct, and unfortunately very seldom done in published papers, where it is assumed that all criteria participate. Now, how did you select the 8 criteria? On what basis?
However, in my opinion, the set of relevant criteria, in your case 8, cannot be assumed to apply to all alternatives. My research constantly shows that the number and the type of criteria are particular to each alternative, and thus the same set of criteria does not apply to all of them, although, in general, some criteria repeat across alternatives in the same problem. That is, for A1 they may be C9, C2 and C7; for A2, C2, C5, C7, C10, C15 and C4; for A3, C7, C9, C1, etc. Therefore, I don't think that you can speak of the same set of criteria for all alternatives.
You say “The other criteria did not affect the ranking order of the options”
I agree, provided that in SA you refer to evaluating ONE alternative of the ranking, not all of them jointly.
7- In reality you don't need to use fuzzy numbers. If you use triangular numbers, which of course is correct, you can use, for each type of criterion, two criteria: one for minimizing the lower value (no less than), and another for maximizing the upper value (no more than). In this way, the final value will be computed by the software and, most importantly, considering its interaction with all other criteria that use the same resource, for instance money. As an example, if you have a minimum and a maximum value for storage, the software will find the INTERMEDIATE value that also satisfies another criterion, for instance, funds to fabricate a product according to demand. Naturally, I am referring to Mathematical Programming.
I hope that these comments are considered useful.
Best regards.
Nolberto Munier
  • asked a question related to Computer
Question
3 answers
Hi everyone,
Does anyone know how to compute transition dipole moment (TDM) between vibronic state using Molpro, Molcas, or Gaussian?
An example is to compute TDM between
v=1 at the ground state and
v=4 at the 1st excited state
Thanks in advance for sharing your expertise
Relevant answer
Answer
Hi Pablo, thanks for your answer but I'm looking for a transition between vibrational states coupled to different electronic states, not between electronic states
  • asked a question related to Computer
Question
3 answers
I ran some Gaussian calculations and transferred the .out file onto a second computer with GaussView installed (don't ask why it is not installed on the first PC).
This was perfectly fine until a week ago, when I started to always get the following error message.
"Reading file D://path/XXX.out
SCUtil_ConnectionGauss:.preprocessfile_ScanGeom()
Bad atomic symbol/atomic number:t
Line number: 17"
This never happened before with any calculation, but now I cannot open any .out file, not even those that previously opened without a problem.
I have not performed any update of Gaussian or GaussView, but I already tried reinstalling both. Ultimately, the first PC now also has GaussView, but opening .out files on the second one is still not working.
Any ideas?
Relevant answer
I could not view the summary in the results. Why is that, and how can I resolve it?
  • asked a question related to Computer
Question
1 answer
Hi everyone,
We are planning to apply for a grant to develop and implement an AI curriculum for upper elementary students. As part of our proposal, we need to identify a validated research instrument that we can use to assess elementary students’ AI content knowledge.
We are looking for a test that is aligned with the 5 AI4K12 Big Ideas and is appropriate for elementary students. The 5 AI4K12 Big Ideas are:
Perception: Computers perceive the world using sensors.
Representation & Reasoning: Computers represent knowledge and use it to reason about the world.
Learning: Computers can learn from data.
Natural Interaction: Computers can interact with humans in natural ways.
Societal Impact: AI has the potential to have a significant impact on society.
If you know of any tests that meet these criteria or any other suggestions, please let me know.
Thank you all for your help!
Best,
Kaya
Relevant answer
Answer
Designing an AI curriculum for upper elementary students should focus on introducing fundamental concepts in a fun and accessible way. The goal is to foster an understanding of AI principles without overwhelming the students with complex technical details. Here are some research-based requirements to consider:
  1. Age-appropriate content: The curriculum should be tailored for upper elementary students (typically ages 9 to 11) with clear language and engaging visuals. Concepts should be introduced gradually to match their cognitive abilities.
  2. Hands-on activities: Incorporate interactive and hands-on activities, experiments, and projects to promote active learning. This helps students grasp abstract concepts more effectively.
  3. Ethical considerations: Teach the importance of responsible AI use, ethical implications, and potential biases in AI algorithms. Encourage critical thinking about the impact of AI on society and discuss real-world AI use cases.
  4. Real-world examples: Use relatable examples and applications of AI in everyday life to make the subject matter more relevant and interesting to students.
  5. Collaborative learning: Include group activities to encourage teamwork, communication, and problem-solving skills, which are essential for AI development and application.
  6. Basic programming concepts: Introduce simple programming concepts using age-appropriate languages or visual programming tools to familiarize students with the logic behind AI algorithms.
  7. AI in the arts: Explore creative aspects of AI, such as using AI to generate art or music, to engage students' imagination and creativity.
  8. Robotics: Incorporate robotics projects that allow students to interact with AI systems and understand how AI can control physical devices.
  9. Data and patterns: Teach students how AI algorithms use data and patterns to make decisions or predictions, emphasizing the importance of data in AI development.
  10. AI and jobs: Discuss potential job opportunities and career paths related to AI, inspiring students to consider AI-related fields in the future.
  11. Assessment strategies: Design age-appropriate assessments that focus on understanding concepts rather than rote memorization. Consider alternative forms of assessment, such as projects and presentations.
  12. Parental involvement: Encourage parental involvement by providing resources and suggestions for at-home learning activities related to AI.
  13. Diversity and inclusion: Ensure that the curriculum is inclusive, representing diverse perspectives, and avoiding stereotypes.
  14. Professional development: Provide training and support for teachers to effectively deliver the AI curriculum and address potential challenges.
  15. Continual updates: As AI technology evolves rapidly, the curriculum should be regularly updated to reflect current trends and developments.
Remember that this curriculum aims to introduce the basic principles of AI in an age-appropriate and enjoyable manner. It should lay the foundation for future learning opportunities and exploration in the field of AI.
  • asked a question related to Computer
Question
2 answers
Experiment No. 7: CONFIGURING VLANS on MULTIPLE SWITCHES
Experiment No. 8: Connecting HUB with Switch
Experiment No. 9: CONFIGURING STATIC ROUTING
Experiment No. 10: CONFIGURING DYNAMIC ROUTING USING RIP
Experiment No. 11: CONFIGURING DYNAMIC ROUTING USING OSPF
Relevant answer
Answer
You could see all the lab experiment PDF documents at https://www.tetcos.com/netsim-acad.html. They are quite comprehensive and we use these in our lab courses.
  • asked a question related to Computer
Question
2 answers
Hello everyone, I have data that is not normally distributed, so I need to run the Mann-Whitney U test instead of the independent-samples t-test, and the Kruskal-Wallis test instead of ANOVA. The problem is that my data consist of five-point Likert-scale items (I have several items that test a particular aspect of the study; they are organized into scales, every scale consists of a number of items which are all five-point Likert scales, and the Cronbach's alpha is fine). My question is: do I combine these items based on the mean to create one variable? Or do I need to combine them based on something else (sum, median...) because the non-parametric tests use rankings? I do hope you would be so kind as to help me. Thank you.
Relevant answer
Answer
The usual approach to creating scales from Likert-scored items is simply to add them together (or you can take an average, which is essentially the same thing because you are dividing by a constant). After that I would look at the distribution of your scales.
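As a small, hedged illustration of that workflow in Python (simulated data and made-up column names, so only the structure matters): sum the items of each scale into one score per respondent, then run the non-parametric test on the scores.

import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical data: 5 Likert items (1-5) for two groups of 30 respondents each
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(60, 5)),
                  columns=[f"item{i}" for i in range(1, 6)])
df["group"] = np.repeat(["A", "B"], 30)

# Sum (or average) the items to form one scale score per respondent
df["scale"] = df[[f"item{i}" for i in range(1, 6)]].sum(axis=1)

# Compare the two groups with the Mann-Whitney U test
u, p = mannwhitneyu(df.loc[df.group == "A", "scale"],
                    df.loc[df.group == "B", "scale"],
                    alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")

The Kruskal-Wallis case is analogous, using scipy.stats.kruskal with one score vector per group.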
  • asked a question related to Computer
Question
3 answers
What are the factors affecting the measurement of electrolytic conductivity using DC and AC techniques? What are the challenges and precautions related to electrolytic conductivity measurement? How to compute the cell constant?
Relevant answer
Answer
Electrolytic Conductivity Measurement
Electrolytic conductivity measurement is a key technique used in various fields to determine the concentration of ions in an electrolytic solution. It applies a voltage to two electrodes immersed in a solution and measures the resulting current. The ability of the solution to conduct an electric current is related to its ion concentration.
DC (Direct Current) Technique
A direct current is applied across the electrodes in a DC measurement, and the resulting potential difference or current is measured. One of the main issues with DC measurement is electrode polarization, where charges build up on the electrode surfaces, resulting in a concentration gradient that can skew the measurements.
AC (Alternating Current) Technique
The AC method addresses the problem of electrode polarization by using an alternating current instead of a direct current. Because the direction of the current is continually changing, charge buildup is minimized, and more accurate conductivity measurements can be obtained.
Factors Affecting the Measurement
  1. Temperature: Conductivity is highly temperature-dependent, so it is crucial to control the temperature during the measurement or correct the results for temperature effects.
  2. Electrode Spacing: The distance between the electrodes affects the measurement, and this effect is accounted for using the cell constant.
  3. Concentration of the Electrolyte: The concentration of ions in the solution directly impacts the conductivity.
  4. Ionic Mobility: Different ions have different mobilities, which impacts their ability to conduct electricity.
  5. Electrode Material and Cleanliness: The electrode material can impact measurements, and any contamination on the electrode surfaces can also skew results.
Challenges and Precautions
  1. Polarization Effects: As mentioned, electrode polarization can significantly impact measurements, especially with DC techniques. Using AC techniques or specially designed electrodes can minimize these effects.
  2. Contamination: Contamination of either the solution or the electrodes can significantly skew measurements. Cleaning and maintaining the electrodes and using high-purity solutions where necessary are crucial.
  3. Temperature Control: As conductivity is temperature-dependent, temperature control during measurement is critical, or a temperature correction factor must be applied.
Computing the Cell Constant
The cell constant (K) is a factor used to account for the geometry of the electrode cell. For an ideal parallel-plate cell it is given by:
K = d / A
where d is the distance between the electrodes and A is their area; the units of the cell constant are cm-1. In practice it is determined by calibrating the cell with a solution of known conductivity κ_std: measuring the resistance R of that solution gives K = κ_std × R, and the conductivity of any other sample then follows from κ = K / R.
It's crucial to note that if the electrodes are not parallel plates (often in practical applications), the cell constant must be determined through calibration rather than simple geometric calculation. It's typically a good idea to periodically recalibrate the cell constant to ensure accurate measurements, as it can change over time due to factors such as electrode wear or buildup.
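As a small worked example of the calibration route just described (the resistance readings are hypothetical; the KCl conductivity is a standard reference value), in Python:

# Calibrate the cell constant with a standard solution, then measure an unknown
kappa_std = 1.413e-3          # S/cm, 0.01 M KCl at 25 C (reference value)
R_std = 745.0                 # ohm, resistance measured with the standard (assumed)
K_cell = kappa_std * R_std    # cell constant in cm^-1, since kappa = K / R

R_sample = 1250.0             # ohm, resistance measured with the unknown (assumed)
kappa_sample = K_cell / R_sample
print("K =", round(K_cell, 3), "cm^-1;  sample conductivity =",
      round(kappa_sample * 1e6, 1), "uS/cm")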
  • asked a question related to Computer
Question
3 answers
For example, if I communicate using words having a word-number code in order to manipulate computers in the future using known AI techniques, should this be considered "illegal" when the intent has also been described transparently in advance of any proposed future change? #616 was here @DNA-Modifications (It's in the AI.RE)
Relevant answer
Answer
I tell you, not only have I been alive in another human body before, I evolved from stardust (certainly Jupiter). How many people have a similar story, I don't know. Can I prove this "theory"?... that is a question for sighing ants and whatever witness action-reaction can establish within the Pineal Gland ~
  • asked a question related to Computer
Question
6 answers
Hey guys,
I have found two multi-item scales in my previous research for my master's thesis. I want to know whether I can compute an EFA for the dependent and for the independent variable.
Relevant answer
Answer
Yes, you can run an exploratory factor analysis for both the regressor (independent) variable and the criterion (dependent) variable.
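If it helps, here is a minimal sketch of one such EFA in Python with the factor_analyzer package (simulated item responses and hypothetical column names; R's psych package or SPSS would do the same job). You would run it once for the items of the independent-variable scale and once for the dependent-variable scale:

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer   # pip install factor_analyzer

# Simulated responses for the items of ONE scale (hypothetical data)
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 6)),
                     columns=[f"iv_item{i}" for i in range(1, 7)])

fa = FactorAnalyzer(n_factors=1, rotation=None)  # use rotation="varimax" for >1 factor
fa.fit(items)
print(fa.loadings_)              # item loadings on the extracted factor(s)
print(fa.get_factor_variance())  # variance explained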
  • asked a question related to Computer
Question
2 answers
I need a step-by-step procedure on how to perform a single parameter sensitivity analysis to evaluate the impact of parameters on a vulnerability index. I am particularly confused about how to create the sub-areas in GIS and compute the parameter rates and weights.
Relevant answer
Answer
Look up univariate sensitivity analysis :)
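To make that a bit more concrete, below is a minimal sketch (in Python, with made-up ratings and weights rather than your data) of the common single-parameter sensitivity calculation for a weighted-sum vulnerability index, where the "effective weight" of each parameter in every sub-area is compared with its theoretical weight. In GIS, the sub-areas are simply the polygons or raster cells of your overlay, so the same arithmetic is applied cell by cell after the map-algebra step:

import numpy as np

# Hypothetical example: 3 sub-areas (rows) x 4 parameters (columns)
ratings = np.array([[7, 3, 8, 5],
                    [4, 6, 2, 9],
                    [8, 8, 5, 3]], dtype=float)   # parameter ratings per sub-area
weights = np.array([5, 4, 3, 2], dtype=float)     # assigned parameter weights

index = ratings @ weights                         # vulnerability index V per sub-area

# Effective weight of parameter j in sub-area i:  W_ij = (r_ij * w_j / V_i) * 100
effective_w = ratings * weights / index[:, None] * 100
theoretical_w = weights / weights.sum() * 100

print("Index per sub-area:", index)
print("Mean effective weight (%):", effective_w.mean(axis=0).round(1))
print("Theoretical weight (%):   ", theoretical_w.round(1))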
  • asked a question related to Computer
Question
1 answer
Hello,
Can anyone advise me on storing state variable values in an array at different time steps? I'm trying to make computations at TIME(1) = 0.1, 0.6 and 0.9. I am able to compute the state variables, but I need to make calculations at the end of TIME(1) = 0.9. When I want to call the values at T(0.1), T(0.6) and T(0.9), I'm unable to do it. These values are stored in the ODB, but I would like to automate the process so the parameters are updated for the next steps.
Thanks in advance
Relevant answer
Answer
In ABAQUS, state variables can be stored in arrays using the appropriate data type. To allocate state variables, you set their number with the `*DEPVAR` keyword in the material definition; inside the user subroutine they are then available as the `STATEV` array. For example, if you have `n` state variables, the array is dimensioned as `STATEV(n)`. To access the values of individual state variables, you use the notation `STATEV(1)`, `STATEV(2)`, and so on, depending on the index of the desired state variable.
To store state variable values at different time steps in an array, you can create an array with sufficient capacity to hold the values at each time step. Initialize the array and update its elements with the computed state variable values at the corresponding time steps. For your specific case, you can create an array with three elements to store the values at T(0.1), T(0.6), and T(0.9). After computing the state variables at each time step, assign the values to the corresponding array elements. Then, when you need to retrieve the values at T(0.1), T(0.6), and T(0.9), you can simply access the array elements at the desired indices. This automation ensures that the parameters are updated for subsequent steps, allowing for efficient calculations and retrieval of the required values.
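If the aim is to automate reading those values back from the ODB after the analysis, a rough sketch with the Abaqus Python scripting interface may help (run it with "abaqus python"; the job name, step name and SDV field name below are assumptions, and SDV field output must have been requested):

# Hedged sketch: read a solution-dependent state variable from selected frames of an ODB
from odbAccess import openOdb   # available when run with "abaqus python"

odb = openOdb(path='Job-1.odb', readOnly=True)       # hypothetical job name
step = odb.steps['Step-1']                           # hypothetical step name

wanted_times = (0.1, 0.6, 0.9)
stored = {}                                          # step time -> list of SDV1 values
for frame in step.frames:
    t = frame.frameValue                             # step time of this frame
    if any(abs(t - wt) < 1e-6 for wt in wanted_times):
        field = frame.fieldOutputs['SDV1']           # first state variable, if requested
        stored[t] = [v.data for v in field.values]   # one entry per integration point

odb.close()
print(sorted(stored.keys()))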
  • asked a question related to Computer
Question
1 answer
Trying to compute polychoric correlation coefficients for 10-12 pairs of ordinal variables. Looking for suggestions on open-source software to do the computation.
Thanks, Sarosh
Relevant answer
Answer
Polychoric correlation is a statistical method used to assess the relationship between two or more ordinal variables. Unlike traditional Pearson correlation, which is suitable for continuous variables, polychoric correlation takes into account the discrete nature of ordinal variables by estimating the underlying latent continuous variables that generate the observed ordinal responses. It does this by assuming a joint bivariate normal distribution for the latent variables and using maximum likelihood estimation to compute the correlation coefficient. The resulting polychoric correlation provides a measure of the strength and direction of the relationship between ordinal variables, enabling researchers to analyze and interpret their associations more accurately.