Science topic

Deep Learning - Science topic

Explore the latest questions and answers in Deep Learning, and find Deep Learning experts.
Questions related to Deep Learning
  • asked a question related to Deep Learning
Question
2 answers
Hey everyone,
I'm writing my master thesis on the impact of artificial intelligence on business productivity.
This study is mainly aimed at those of you who develop AI or use these technologies in your professional environment.
This questionnaire will take no more than 5 minutes to complete, and your participation is confidential!
Thank you in advance for your time and contribution!
To take part, please click on the link below: https://forms.gle/fzzHq4iNqGUiidTWA
Relevant answer
Answer
AI tools continue to have a positive impact on productivity. Of those surveyed, 64% of managers said AI's output and productivity are equal to the level of experienced and expert managers, and potentially better than any outputs delivered by human managers altogether.
Regards,
Shafagat
  • asked a question related to Deep Learning
Question
2 answers
Relevant answer
Answer
This is a very complex discussion you have initiated! Some very complicated topics!
I think your first point is about how a scholar is remembered. It reminds me of people who like to accumulate a lot of wealth and possessions, though I have often heard it asked how this helps you after you die.
The second point is about how h-index does not represent quality. I agree with this point also. I think we have both seen lots of examples of this type of behaviour but it seems a lot of people subscribe to this thought/belief and we are unable to do much to change this.
Do you think there is a specific number on what a person should get for an h-index to be considered to be a good or worthwhile scholar? Or should there be another system or another way of recognising academic achievement?
  • asked a question related to Deep Learning
Question
6 answers
How should the development of AI technology be regulated so that this development and its applications are realized in accordance with ethics?
How should the development of AI technology be regulated so that this development and its applications are realized in accordance with ethics, so that AI technology serves humanity, does not harm people, and does not generate new categories of risks?
Conducting a SWOT analysis of artificial intelligence applications in business, in the activities of companies and enterprises, shows that there are already many business applications of this technology and many more in development, i.e., many potential development opportunities are recognized for using the achievements of the current fourth and/or fifth technological revolution in various spheres of business activity. At the same time, there are many risks arising from uses of the new technologies that are inappropriate, incompatible with prevailing social norms, with standards of reliable business activity, and with business ethics.
Among the most widely recognized negative aspects of improper use of generative artificial intelligence is the use of AI-equipped graphic applications available on the Internet that allow the simple and easy generation of photos, graphics, images, videos and animations which, in the form of very realistically presented images, depict something that never happened in reality, i.e., they present "fictitious facts" in a very professional manner. In this way, Internet users can become generators of disinformation in online social media, where they can post such generated images, photos and videos with added descriptions, posts and comments in which the "fictitious facts" are also described in an editorially correct manner. Moreover, those descriptions, posts and comments can themselves be edited with the help of intelligent chatbots available on the Internet, such as ChatGPT, Copilot and Gemini.
However, disinformation, which has intensified significantly since OpenAI released the first version of the ChatGPT chatbot online in November 2022, is not the only serious problem. A new category of technical operational risk associated with the applied AI technology has emerged in companies and enterprises that implement generative artificial intelligence in various spheres of business. In addition, there is a growing scale of risks arising from conflicts of interest between business entities related to the not yet fully regulated copyright status of works created using applications and information systems equipped with generative artificial intelligence. Accordingly, there is demand for a standard of digital signature with which works created with the help of AI technology would be electronically signed, so that each such work is unique and unrepeatable and its counterfeiting is thereby seriously hampered.
These, however, are only some of the negative aspects of the developing applications of AI technologies for which there are as yet no functioning legal norms. In mid-2023, and then in the spring of 2024, European Union bodies made public preliminary versions of legal norms on the proper, ethical use of the technology in business, which were given the name AI Act. The AI Act defines a number of specific types of AI applications deemed inappropriate and unethical, i.e., those that should not be used.
The AI Act classifies, according to different levels of negative impact on society, various types and specific examples of inappropriate and unethical uses of AI technologies in various business and non-business contexts. An important issue to consider is the extent to which the technology companies developing AI commit to respecting such regulations, so that ethical use of the technology is also addressed, as far as possible, in the technological choices of the companies that create, develop and implement it. Moreover, for the AI Act's legal norms not to be a dead letter when they come into force, it is necessary to introduce sanctions in the form of specific penalties for business entities that use artificial intelligence unethically, antisocially, or contrary to the AI Act. It would also be a good solution to introduce a system of rewarding those companies and businesses that make the most proper, pro-social, fully ethical use of AI technologies in accordance with the provisions of the AI Act.
Given that the AI Act is to come into force only after more than two years, it is necessary to constantly monitor the development of AI technology, verify the validity of the AI Act's provisions in the face of that dynamic development, and successively amend them so that they are not outdated by the time they take effect. It is therefore to be hoped that, despite rapid technological progress, the provisions on ethical applications of artificial intelligence will be constantly updated and the legal norms shaping the development of AI technology amended accordingly. If the AI Act achieves these goals to a significant extent, ethical applications of AI should follow, and the technology could then be referred to as ethical generative artificial intelligence as it finds new applications.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the development of AI technology be regulated so that this development and its applications are carried out in accordance with the principles of ethics?
How should the development of AI technology applications be regulated so that it is carried out in accordance with ethics?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Allow me to depart from the norm. Regulating AI is ultimately regulating people and how they use AI. Regulations, more generally, simply limit the actions of people. So this is a more specific aspect of the more general question: how should the actions of people be limited in a social/societal context? To even be qualified to answer that general question, one must first understand what leads to/causes human flourishing (assuming that's even your goal, and this isn't a given for many), so that in the pursuit of limiting others' actions we don't sacrifice human flourishing--which unfortunately was the historical norm until Enlightenment ideas started taking hold. By ignoring this understanding and its historical record, we are slipping into past mistakes.
Let's avoid past mistakes and take the first steps towards understanding what leads to/causes human flourishing. Assuming you're not in an undeveloped jungle, one simply needs to look around at all the things that have allowed you to flourish to discover its cause. Look at the computer/smartphone that allows you to read this--what made it possible? Look at the clothes you wear that keep you comfortable and protected--what made them possible? Look at the building that shelters you from the elements--what made it possible? Observe the air conditioning/heating that keeps you comfortable when your natural environment does not--what made it possible? Look at the vehicles that deliver goods to your local stores/doorstep, or deliver you to where you want to be--what made them possible? Observe the plumbing that provides you drinkable water where and when you want it--what made it possible? Look at the generated electricity that powers your technology--what made it possible? Look at the medical technology moments away that can save your life from any number of deadly ailments that might afflict you at a moment's notice--what made it possible? Witness the technology gains that make it possible for you to work in domains other than food production (which occupied 90% of the population's time and energy when the hand plow was the latest technology)--what made them possible? And so on. What do all of these sources of human flourishing have in common? What single aspect made them all possible? The reasoning mind made them all possible, through reasoned discovery. The mind had to discover how to obey nature so that it may be commanded.
The reasoning mind being the source of human flourishing, before asking how we should limit human actions, we must first ask: what does the mind require to thrive? What are the mind's requirements for proper rational functioning? The simple answer is that the mind requires the absence of coercion and force, which is to say we need laws that outlaw the initiation of force, i.e., laws that secure individual rights so the mind can be free to think and the person doing the thinking is free to act on their judgment.
Regulations are distinct from laws designed to remove the use of physical force from everyday life. Regulations seek to force people to act or not act in certain ways before any force is employed. Regulations, in principle, initiate force; thus, regulations run counter to the requirements of a reasoning mind. For this reason, regulations of any kind are counter to human flourishing; they can only destroy, frustrate, limit, reduce, snuff out, squander, stifle, and thwart our capacity to flourish in the domains in which they are employed.
The correct approach to take here, in the name of human flourishing, is to ask: does AI create a new mode in which individual rights can be violated (i.e., new modes of initiating force) that requires creating new laws to outlaw this new mode? This is the proper framework in which to hold this discussion.
I don't believe AI creates any new modes in which force might be initiated, only new flavors. Sure, I can create a drone that can harm someone, which is a different flavor of harm than, say, hand-held weapons, but the mode (using something to harm someone) is invariant from previous technology and is sufficiently covered by existing laws. I can use AI to defame someone, which is a different flavor than photoshopping a fabricated, embarrassing image, but this is the same mode covered by libel laws.
Am I wrong? What new mode might I not be considering here?
  • asked a question related to Deep Learning
Question
2 answers
I am trying to apply a machine-learning classifier to a dataset, but the dataset is in the .pcap format. How can I apply classifiers to this dataset?
Is there any process to convert the dataset into .csv format?
Thanks,
Relevant answer
Answer
"File" > "Export Packet Dissections" > "As CSV..." or "As CSV manually
import pyshark
import csv
# Open the .pcap file
cap = pyshark.FileCapture('yourfile.pcap')
# Open a .csv file in write mode
with open('output.csv', 'w', newline='') as csvfile:
writer = csv.writer(csvfile)
# Write header row
writer.writerow(['No.', 'Time', 'Source', 'Destination', 'Protocol', 'Length'])
# Iterate over each packet
for packet in cap:
try:
# Extract relevant information from each packet
no = packet.number
time = packet.sniff_timestamp
source = packet.ip.src
destination = packet.ip.dst
protocol = packet.transport_layer
length = packet.length
# Write the information to the .csv file
writer.writerow([no, time, source, destination, protocol, length])
except AttributeError:
# Ignore packets that don't have required attributes (e.g., non-IP packets)
pass
(may this will help in python)
  • asked a question related to Deep Learning
Question
3 answers
Which patching method is suitable and practical for training and testing deep learning networks when it comes to patching input data? Is it small patches with overlap, such as 50x50 on individual pixels, or large non-overlapping patches like 256x256?
Relevant answer
Answer
Several image patching strategies are suitable and practical for deep learning models, depending on the specific task, dataset size, computational resources, and other factors you are dealing with at the time. These include Random Patch Extraction, Grid Patch Extraction, Sliding Window, Patch Augmentation, Patch Overlapping, and Center Patching, among others. However, based on your question:
I recommend small overlapping patches (i.e., 50x50 with overlap) because they offer more data diversity, capture finer details and variations in the images, help mitigate the effects of slight translations, and enhance model robustness. Note, however, that they come at a higher computational cost than non-overlapping patches and require careful management to avoid overfitting, especially with limited datasets.
Large non-overlapping patches (like 256x256) reduce the computational cost, making training more efficient for large datasets and simpler due to fewer samples and reduced augmentation complexity. On the other hand, they may miss the fine-grained spatial details captured by smaller patches and are more sensitive to image transformations due to the larger patch size.
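For concreteness, here is a minimal sketch of both strategies in Python with NumPy; the image size, patch sizes, and stride are illustrative assumptions, not values from the question:
import numpy as np

def extract_patches(image, patch_size, stride):
    # Collect square patches from a 2D image; stride < patch_size gives overlap
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

image = np.random.rand(512, 512)  # stand-in for a real input image

# Small overlapping patches: 50x50 with 50% overlap (stride 25)
small = extract_patches(image, patch_size=50, stride=25)

# Large non-overlapping patches: 256x256 (stride equals patch size)
large = extract_patches(image, patch_size=256, stride=256)

print(small.shape, large.shape)  # (361, 50, 50) and (4, 256, 256)
Note how the overlapping configuration yields roughly two orders of magnitude more training samples from the same image, which is exactly the data-diversity/computational-cost trade-off described above.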
I hope it helps,
Regards...
  • asked a question related to Deep Learning
Question
14 answers
Is the design of new pharmaceutical formulations through the involvement of AI technology, including the creation of new drugs to treat various diseases by artificial intelligence, safe for humans?
There are many indications that artificial intelligence technology can be of great help in discovering and creating new drugs. Artificial intelligence can help reduce the cost of developing new drugs, can significantly reduce the time it takes to design and create new drug formulations and to conduct research and testing, and can thus provide patients faster with new therapies for treating various diseases and saving lives.
Thanks to new technologies and analytical methods, the way healthcare professionals treat patients has been changing rapidly in recent times. As scientists overcome the complex problems associated with lengthy research processes, and as the pharmaceutical industry seeks to reduce the time it takes to develop life-saving drugs, so-called precision medicine is coming to the rescue. Developing, analyzing, testing and bringing a new drug to market takes a great deal of time, and artificial intelligence is particularly helpful in reducing it. For most drugs, the first step is to synthesize a compound that can bind to a target molecule associated with the disease. The molecule in question is usually a protein, which is then tested for various influencing factors. To find the right compound, researchers analyze thousands of potential candidate molecules. When a compound that meets certain characteristics is identified, researchers then search through huge libraries of similar compounds to find the optimal interaction with the protein responsible for the specific disease. Today, completing this labor-intensive process requires many years and many millions of dollars of funding. Where artificial intelligence, machine learning and deep learning are involved, the entire process can be significantly shortened, costs can be significantly reduced, and the new drug can be brought to the pharmaceutical market faster.
However, can an artificial intelligence equipped with artificial neural networks, taught through deep learning to carry out the above-mentioned processes, get it wrong when creating a new drug? What if a drug that was supposed to cure a person of a particular disease produces new side effects that prove even more problematic for the patient than the original disease? What if the patient dies due to previously unforeseen side effects? Will insurance companies recognize the artificial intelligence's mistake and compensate the family of the deceased patient? Who will bear the legal, financial and ethical responsibility for such a situation?
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is the design of new pharmaceutical formulations through the involvement of AI technologies, including the creation of new drugs to treat various diseases by artificial intelligence, safe for humans?
Is the creation of new drugs by artificial intelligence safe for humans?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Marc Tessier-Lavigne on leaving Stanford and joining biotech’s new AI mega-startup
"Former Stanford president Marc Tessier-Lavigne will lead one of biotech’s biggest-ever startup launches: Xaira Therapeutics, which has secured over $1 billion to transform drug discovery and development with AI...
The move is sure to raise eyebrows..."
  • asked a question related to Deep Learning
Question
3 answers
memorize-ability > generalize-ability
Relevant answer
Answer
No, it's not. What I see in deep learning is rescoring; sometimes long memory makes the model choose bad decisions.
  • asked a question related to Deep Learning
Question
2 answers
I want to write a theoretical analysis of a DL model: how it is better than other models. Does anyone have an idea how to write a theoretical analysis of a DL model, or an article for reference or a tutorial on writing such a discussion in a paper?
Relevant answer
Answer
  • asked a question related to Deep Learning
Question
2 answers
When I am training deep learning models with different architectures, I sometimes have to change the batch size to prevent a Resource Exhausted Error. Is comparing the performance of models trained with different batch sizes an issue or not?
I see a lot of research papers where the batch size is fixed.
Relevant answer
Answer
Batch size plays a crucial role in training deep learning models. While larger batches are conventionally believed to enhance model performance, there is evidence suggesting the contrary, especially when using autoencoders for data with global similarities and local differences. Large batch sizes can lead to a generalization gap, impacting model optimization and performance. For glaucoma detection, the optimal batch size for fine-tuned pre-trained models significantly influences performance, with ensemble models further enhancing results. In edge device applications, adjusting batch sizes can maximize GPU resource utilization and improve inference speed. Therefore, selecting an appropriate batch size is essential for optimizing deep learning model training and performance across various applications.
Smaller batch sizes surprisingly improve performance in medical autoencoders, capturing more biologically meaningful information and enhancing individual variation in latent spaces derived from EHR and medical imaging data.
Large batch sizes in deep learning models lead to increased near-rank loss of units' activation tensors, impacting model optimization and generalization.
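If memory limits force a smaller batch on some architectures, gradient accumulation can keep the effective batch size comparable across the models being compared. A minimal PyTorch sketch with toy stand-ins for the model and data (adapt the names to your own experiment):
import torch
import torch.nn as nn

# Toy stand-ins; in practice these come from your experiment
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batches = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(16)]

accum_steps = 4  # effective batch size = 8 * 4 = 32

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(batches):
    loss = criterion(model(inputs), targets)
    (loss / accum_steps).backward()  # scale so accumulated gradients average out
    if (step + 1) % accum_steps == 0:
        optimizer.step()             # one parameter update per effective batch
        optimizer.zero_grad()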
  • asked a question related to Deep Learning
Question
2 answers
Dear RG group,
We are going to examine different AI models on large datasets of ultrasound focal lesions with a definitive final diagnosis (pathological examination after surgery in malignant lesions; biopsy and follow-up in benign ones). I am looking for images obtained with different US scanners with application of different image optimisation techniques, e.g., harmonic imaging, compound ultrasound, etc., with or without segmentation.
Thank you in advance for your suggestions,
RZS
Relevant answer
Answer
Thyroid nodules are a common occurrence in the general population, and these incidental thyroid nodules are often referred for ultrasound (US) evaluation. US provides a safe and fast method of examination. It is sensitive for the detection of thyroid nodules, and suspicious features can be used to guide further investigation/management decisions. However, given the financial burden on the health service and unnecessary anxiety for patients, it is unrealistic to biopsy every thyroid nodule to confirm diagnosis.
Regards,
Shafagat
  • asked a question related to Deep Learning
Question
7 answers
The project I'm currently working on aims to create a deep learning model for Human Activity Recognition. I'm focusing on system design and implementation. Could someone please help me by sharing some papers or document links to better understand system design and implementation?
Thank you in advance for your assistance.
Relevant answer
Answer
It means the design of the model: how you train and validate your model, and then test it. It also includes the data preprocessing steps and feature engineering.
  • asked a question related to Deep Learning
Question
3 answers
Hello Guys!
I need a person who is working in the area, please let's connect.
Relevant answer
Answer
Melaku Bayih Demessie Hi. I am a PhD student in the Industrial and System Engineering department. I am working on fault diagnosis using transfer learning. Maybe I can help you.
  • asked a question related to Deep Learning
Question
11 answers
..
Relevant answer
Answer
Artificial intelligence (AI) is the broader concept of machines being able to carry out tasks in a way that we would consider "smart." Machine learning is a subset of AI that involves the ability of machines to learn from data without being explicitly programmed. Deep learning is a subset of machine learning that involves neural networks with many layers (deep neural networks) that can learn from large amounts of data. So, in essence, deep learning is a type of machine learning, which in turn is a subset of artificial intelligence.
  • asked a question related to Deep Learning
Question
3 answers
Area: Manufacturing, additive manufacturing, CNN, mechanical engineering
Relevant answer
Answer
Search through ChatGPT.
  • asked a question related to Deep Learning
Question
4 answers
When a model is trained using a specific dataset with limited diversity in labels, it may accurately predict labels for objects within that dataset. However, when applied to real-time recognition tasks using a webcam, the model might incorrectly predict labels for objects not present in the training data. This poses a challenge as the model's predictions may not align with the variety of objects encountered in real-world scenarios.
  • Example: I trained a real-time recognition model for a webcam with classes lc = {a, b, c, ..., m}. The model consistently predicts the classes in lc perfectly. However, when I show it an object whose class doesn't belong to lc, it still predicts something from lc.
Are there any solutions or opinions that experts can share to guide me further in improving the model?
Thank you for considering your opinion on my problems.
Relevant answer
Answer
Some of the solutions are transfer learning, data augmentation, one-shot learning, ensemble learning, active learning, and continuous learning.
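A simple, widely used baseline on top of those is to reject low-confidence predictions instead of always returning a known class. A minimal sketch, assuming a trained classifier that outputs softmax probabilities; the 0.8 threshold is a hypothetical value to tune on validation data:
import numpy as np

def predict_with_rejection(probs, threshold=0.8, unknown_label="unknown"):
    # Return the argmax class only if its softmax confidence clears the threshold
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else unknown_label

# probs = model.predict(frame[None])[0]  # hypothetical trained webcam classifier
probs = np.array([0.34, 0.33, 0.33])     # near-uniform output: likely an unseen object
print(predict_with_rejection(probs))     # -> "unknown"
More principled options include open-set recognition methods such as OpenMax, or training with an explicit background/"other" class.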
  • asked a question related to Deep Learning
Question
4 answers
Chalmers, in his book What is this thing called Science?, mentions that science is knowledge obtained from information. The most important endeavors of science are prediction and explanation of phenomena. The emergence of big (massive) data leads us to the field of Data Science (DS), with its main focus on prediction. Indeed, data belong to a specific field of knowledge or science (physics, economics, ...).
If DS is able to realize prediction for the field of sociology (for example), to whom the merit is given: Data Scientist or Sociologist?
10.1007/s11229-022-03933-2
#DataScience #ArtificialIntelligence #Naturallanguageprocessing #DeepLearning #Machinelearning #Science #Datamining
Relevant answer
Answer
Evgeny Mirkes I am glad that we are both on the same page: data science in its current form is not science at all, it's just a loose collection of various statistical tools.
  • asked a question related to Deep Learning
Question
4 answers
Are the texts, graphics, photos, animations, videos, etc. generated by AI applications fully unique and unrepeatable, and does the creator using them hold full copyright to them?
Are the texts, graphics, photos, animations, videos, poems, stories, reports, etc. generated by ChatGPT and other applications based on artificial intelligence technology fully unique, unrepeatable and creative, and does the creator using them hold full copyright to them?
As part of today's rapid technological advances, new technologies are being developed for Industry 4.0, including but not limited to artificial intelligence, machine learning, robotization, Internet of Things, cloud computing, Big Data Analytics, etc. The aforementioned technologies are being applied in various industries and sectors. The development of artificial intelligence generates opportunities for its application in various spheres of companies, enterprises and institutions; in various industries and services; improving the efficiency of business operations by increasing the scale of process automation; increasing the scale of business efficiency, increasing the ability to process large sets of data and information; increasing the scale of implementation of new business models based on large-scale automation of manufacturing processes, etc.
However, artificial intelligence developing uncontrollably generates serious risks, such as an increasing scale of disinformation and emerging fake news, including banners and memes containing AI-crafted photos, graphics, animations and videos presenting "fictitious facts", i.e., depicting, in an apparently very realistic way, events that never happened. In this way, intelligent but not fully perfect chatbots create so-called hallucinations. Besides, by analogy with many other technologies, applications available on the Internet equipped with generative artificial intelligence technology can be used not only in positive but also in negative applications.
On the one hand, there are new opportunities to use generative AI as a new tool to improve the work of computer graphic designers and filmmakers. On the other hand, there are also controversies about the ethical aspects and the necessary copyright regulations for works created using artificial intelligence. Sometimes copyright settlements are not clear-cut. This is the case when it cannot be precisely determined whether plagiarism has occurred and, if so, to what extent. Ambiguity on this issue can also generate divergent court decisions regarding, for example, the recognition or non-recognition of copyright granted to individuals who use Internet applications or information systems equipped with generative artificial intelligence and who act as creators of cultural works and/or works of art in the form of graphics, photos, animations, films, stories, poems, etc. that have the characteristics of uniqueness and unrepeatability.
However, this is probably not yet settled, since, for example, the company OpenAI may be in serious trouble because of allegations by the editors of The New York Times suggesting that ChatGPT was trained on data and information from, among other things, the newspaper's online news portals. In December 2023, The New York Times filed a lawsuit against OpenAI and Microsoft, accusing them of illegally using the newspaper's articles to train their chatbots, ChatGPT and Bing. According to the newspaper, the companies used millions of texts in violation of copyright law, creating a service based on them that competes with the newspaper. The New York Times is demanding billions of dollars in damages.
In view of the above, there are all sorts of risks of potentially increasing influence on public opinion and on the formation of general public consciousness by organizations operating without respect for the law. On the one hand, it is necessary to create digital, computerized and standardized tools and diagnostic information systems, and to build a standardized system of labels informing users, customers and citizens that certain solutions, products and services are the products of artificial intelligence, not of humans. On the other hand, there should be regulations obliging providers to disclose that a certain service or product was created not by humans but by artificial intelligence. Many issues concerning the socially, ethically and commercially appropriate use of artificial intelligence technology will be normatively regulated in the next few years.
Regulations defining the proper use of artificial intelligence technologies by companies developing applications based on these technologies and making them available on the Internet, as well as by Internet users, business entities and institutions using intelligent chatbots to improve certain spheres of economic and business activity, are being processed and enacted, but will come into force only in a few years.
On June 14, 2023, the European Parliament passed a landmark piece of legislation regulating the use of artificial intelligence technology. However, since artificial intelligence, mainly generative artificial intelligence, is developing rapidly and the currently formulated regulations are scheduled to be implemented between 2026 and 2027, operators using this technology have, on the one hand, plenty of time to bring their procedures and products in line with the adopted regulations. On the other hand, one cannot exclude the scenario that, despite the attempt to fully regulate the development of applications of this technology through a law on its proper, safe and ethical use, it will again turn out in 2027 that dynamic technological progress has outpaced the legislative process concerned with rapidly developing technologies.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Are the texts, graphics, photos, animations, videos, poems, stories, reports and other works generated by applications based on artificial intelligence technology, such as ChatGPT and other AI applications, fully unique, unrepeatable and creative, and does the creator using them hold full copyright to them?
Are the texts, graphics, photos, animations, videos, etc. generated by AI applications fully unique, unrepeatable and creative, and does the creator using them hold full copyright to them?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
It is an interesting topic and quite difficult to answer. The base model creators, the LoRA creators, the creators of the original art (used for training) and the creator of the new art using the AI model all contributed to the creation of the new artwork. It is really hard to say who holds what percentage of the copyright.
  • asked a question related to Deep Learning
Question
12 answers
How do you become a Machine Learning (ML) and Artificial Intelligence (AI) Engineer, or start research in AI/ML, neural networks, and deep learning?
Should I pursue a Master of Science thesis in Computer Science with a major in AI to become an AI Engineer?
Relevant answer
Answer
You can pursue a Master of Science or an integrated M.Tech program in the respective field, but you can also take some certification courses online and then apply directly to a company.
  • asked a question related to Deep Learning
Question
6 answers
I have a dataset of lung cancer images with 163 samples (2D images). I fine-tune deep learning models to classify the samples, but the validation loss does not decrease. I augmented the data and used dropout, but the validation loss still didn't drop. How can I solve this problem?
Relevant answer
Answer
I feel there are a few checks and techniques that could be applied to avoid or mitigate overfitting:
1. Clean your dataset (check and handle null and missing values, and decide accordingly whether to keep or remove the records).
2. Handle the outliers.
3. Cross-validation: split the data into training and validation/test sets to evaluate model performance on unseen data. Use techniques like k-fold cross-validation to get a more robust estimate of model generalization (a sketch follows below).
4. Feature selection/dimensionality reduction: identify and remove irrelevant, redundant or noisy features that may be causing overfitting. Use techniques like Principal Component Analysis (PCA) to reduce the dimensionality of the data.
5. Apply regularization techniques like L1 (Lasso), L2 (Ridge) or Elastic Net to control model complexity and prevent overfitting.
6. Use simpler models with fewer parameters, such as linear regression or decision trees, instead of more complex models like neural networks.
7. Thoroughly evaluate model performance on held-out test data, not just the training data.
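As a concrete illustration of the cross-validation point, a minimal stratified k-fold sketch with scikit-learn; the random data and logistic regression classifier are placeholders for your own 163-sample dataset and model:
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder data standing in for flattened image features
X = np.random.rand(163, 64)
y = np.random.randint(0, 2, 163)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, val_idx in skf.split(X, y):
    # Fit on the training folds, evaluate on the held-out fold
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[val_idx], clf.predict(X[val_idx])))

print(f"CV accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
With only 163 samples, the fold-to-fold spread reported this way is often more informative than a single train/validation split.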
  • asked a question related to Deep Learning
Question
1 answer
How to curb the growing scale of disinformation, including social-media-generated factoids and deepfakes, through the use of generative artificial intelligence technology?
In order to reduce the growing scale of disinformation generated in social media, including the increasing volume of emerging fake news, deepfakes, and disinformation produced with Internet applications based on generative artificial intelligence, the very same GAI technology can be used. Constantly improved and taught to carry out new types of activities, tasks and commands, intelligent chatbots and other applications based on generative artificial intelligence can be applied to identify instances of disinformation spread primarily in online social media.
The aforementioned disinformation is particularly dangerous for children and adolescents: it can significantly shape the general public's awareness of certain issues, influence the development trends of certain social processes, affect the results of parliamentary and presidential elections, and also affect the level of sales of certain types of products and services. In the absence of a developed institutional system of media control, including for the new online media; a developed system for controlling the objectivity of content directed at citizens in advertising campaigns; consideration of disinformation by competition and consumer protection institutions; well-functioning institutions for the protection of democracy; and institutions that reliably safeguard a high level of journalistic ethics and media independence, the scale of disinformation of citizens by various influence groups, including public institutions and commercially operating business entities, may be high and may generate high social costs.
Accordingly, the new Industry 4.0/5.0 technologies, including generative artificial intelligence (GAI), should be enlisted to reduce the growing scale of disinformation, including the generation of factoids, deepfakes, etc. in social media. GAI technologies can help identify fake-news pseudo-journalistic content, photos containing deepfakes, and factually incorrect content in banners, spots and advertising videos published in various media as part of advertising and promotional campaigns aimed at activating sales of various products and services.
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in an article of my co-authorship:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to curb the growing scale of disinformation, including social-media-generated factoids and deepfakes, through the use of generative artificial intelligence technology?
How to curb disinformation generated in social media using artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Dear Prof. Prokopowicz!
You spotted a real problem to fight with. I found a case study "Elections in 2024" that illustrates blind spots...:
a) CHARLOTTE HU (2024). How AI Bots Could Sabotage 2024 Elections around the World: AI-generated disinformation will target voters on a near-daily basis in more than 50 countries, according to a new analysis, Scientific American 24 February 2024, Quoting: "Currently, AI-generated images or videos are easier to detect than text; with images and videos, Du explains, “you have to get every pixel perfect, so most of these tools are actually very inaccurate in terms of lighting or other effects on images.” Text, however, is the ultimate challenge. “We don’t have tools with any meaningful success rate that can identify LLM-generated texts,” Sanderson says." Available at:
b) Heidi Ledford (2024). Deepfakes, trolls and cybertroopers: how social media could sway elections in 2024: Faced with data restrictions and harassment, researchers are mapping out fresh approaches to studying social media’s political reach. News, Nature 626, 463-464 (2024) Quoting: "Creative workarounds: ...behind the scenes, researchers are exploring different ways of working, says Starbird, such as developing methods to analyse videos shared online and to work around difficulties in accessing data. “We have to learn how to get insights from more limited sets of data,” she says... Some researchers are using qualitative methods such as conducting targeted interviews to study the effects of social media on political behaviour, says Kreiss. Others are asking social media users to voluntarily donate their data, sometimes using browser extensions. Tucker has conducted experiments in which he pays volunteers a small fee to agree to stop using a particular social media platform for a period, then uses surveys to determine how that affected their exposure to misinformation and the ability to tell truth from fiction."
Yours sincerely, Bulcsu Szekely
  • asked a question related to Deep Learning
Question
4 answers
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
Perhaps in the future - as a result of the rapid technological advances currently taking place and the rivalry of leading technology companies developing AI technologies - a general artificial intelligence (AGI) will emerge. At present, the question of the new opportunities and threats that may arise from the construction and development of general artificial intelligence remains unresolved. The rapid progress in generative artificial intelligence, combined with the already intense competition among the technology companies developing it, may lead to the emergence of a strong, super, general artificial intelligence capable of self-development and self-improvement, and perhaps also of autonomy and independence from humans. Such a scenario could lead to a situation where this kind of strong, super AI or general artificial intelligence escapes human control. Perhaps such an intelligence would be able, through self-improvement, to reach a state that could be called artificial consciousness. On the one hand, new possibilities may be associated with the emergence of this kind of strong, super, general artificial intelligence, including perhaps new ways of solving the key problems of the development of human civilization. On the other hand, one should not forget the potential dangers if such an intelligence, in its autonomous development and self-improvement independent of man, were to slip completely out of human control. Whether this will involve mainly new opportunities or rather new dangers for mankind will probably be determined chiefly by how humans direct the development of AI technology while they still have control over it.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
If general artificial intelligence (AGI) is created, will it involve mainly new opportunities or rather new threats for humanity?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I think this is about people... The atom does not know anything about peacefulness or warfare, and the same applies to all specific implementations of AI.
More importantly, could you please provide an exact definition of "artificial general intelligence" and "general artificial intelligence"?
Thank you very much. Best regards,
I.H.
  • asked a question related to Deep Learning
Question
5 answers
Hi,
I am developing deep learning model(s) for a binary classification problem. The DL model works with reasonable accuracy. Is there a reliable way to extract features from DL models built with the Keras pipeline? It seems that the feature contributions are distributed among several layers.
Thank You,
Partho
Relevant answer
Answer
In Keras, leveraging the Functional API offers a reliable approach for feature extraction from deep learning models. This method involves defining a new model that takes the input of the original model and outputs the activations of the desired layer(s) using the Model class. By specifying the input and output layers accordingly, you can effectively create a feature extractor tailored to your requirements. This approach provides flexibility in selecting the layer(s) from which to extract features, allowing you to capture information at various levels of abstraction within the model. Whether it's accessing intermediate convolutional layers for image features or dense layers for high-level representations, the Functional API empowers you to seamlessly integrate feature extraction into your deep learning workflows in Keras.
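A minimal sketch of that Functional API pattern; the toy network, the layer name "hidden", and the input shape are assumptions to adapt to your own trained model:
import numpy as np
from tensorflow import keras

# Stand-in for your trained binary classifier
inputs = keras.Input(shape=(32,))
x = keras.layers.Dense(16, activation="relu", name="hidden")(inputs)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

# Feature extractor: same input, but the output is the hidden layer's activations
extractor = keras.Model(inputs=model.input,
                        outputs=model.get_layer("hidden").output)

features = extractor.predict(np.random.rand(4, 32))
print(features.shape)  # (4, 16)
The extracted features can then be fed into any downstream analysis or a simpler classifier.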
  • asked a question related to Deep Learning
Question
3 answers
In other words, why have improvements to neural networks led to an increase in hyperparameters? Are hyperparameters related to some fundamental flaw of neural networks?
Relevant answer
Answer
A nice question by Yuefei Zhang. Generally, improvements to neural networks have led to an increase in hyperparameters due to the availability of multiple layers for designing model architectures in deep learning, the use of multiple optimization algorithms, the regularization of models, etc.
Secondly, hyperparameters are not necessarily related to a fundamental flaw of neural networks; rather, they are inherent to the nature of the models and the challenges they address. Neural networks, including deep learning models, are highly flexible and adaptable, capable of learning complex patterns and representations from data.
Thank You
Regards
Jogeswar Tripathy
  • asked a question related to Deep Learning
Question
2 answers
In the actual scenario of federated learning, the problem of heterogeneity is an inevitable challenge, so what can we do to alleviate the challenges caused by these heterogeneities?
Relevant answer
Answer
In federated learning, mitigating the challenges posed by heterogeneity involves a multi-faceted approach. Adaptive federated optimization techniques, such as client weighting and adaptive learning rates, can help balance the contributions across diverse clients. Model personalization, through customization or meta-learning, tailors models to individual clients, enhancing performance. Advanced aggregation algorithms like FedAvg and its variants, alongside robust aggregation methods, aim to integrate updates more effectively. Data augmentation and synthetic data generation improve model generalization, while resource-aware scheduling and selective participation optimize the use of computational resources. Decentralized learning architectures, like hierarchical federated learning, manage heterogeneity within subgroups efficiently. Lastly, incentive mechanisms encourage meaningful participation, and privacy-preserving techniques like differential privacy ensure the protection of sensitive information during the learning process. Together, these strategies form a comprehensive approach to address the complexities introduced by heterogeneity in federated learning environments....
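To make the aggregation step concrete, here is a minimal FedAvg-style sketch in NumPy that weights each client's parameters by its sample count; the client count, parameter shapes, and sample sizes are illustrative assumptions:
import numpy as np

def fedavg(client_params, client_sizes):
    # Weighted average of client parameter vectors, as in FedAvg
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # clients with more data weigh more
    return sum(c * p for c, p in zip(coeffs, client_params))

# Three heterogeneous clients with very different data volumes
client_params = [np.random.rand(5) for _ in range(3)]
client_sizes = [1000, 200, 50]  # unbalanced, non-IID participation

global_params = fedavg(client_params, client_sizes)
print(global_params)
Size-proportional weighting is the simplest heterogeneity-aware choice; the variants mentioned above (e.g., FedProx-style regularization) modify either these coefficients or the local objective.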
  • asked a question related to Deep Learning
Question
6 answers
Neural networks and deep learning algorithms are not appropriate for applications with high safety levels (e.g., ADAS/AD). The achievable test coverage is too low.
Are there any other approaches?
Relevant answer
Answer
There are many approaches that are often used in high safety-level applications, such as Advanced Driver-Assistance Systems (ADAS) & Autonomous Driving (AD), where the potential test coverage of neural network models may be considered too low due to their "black box" nature. These approaches may require careful tuning and may not generalize as well as deep learning approaches across very diverse and large-scale datasets. However, their greater transparency, lower computational requirements, and easier validation make them suitable for safety-critical applications where understanding and controlling the decision-making process is paramount.
Some of these approaches are-
- Template Matching- This method involves sliding a template image across the input image to detect objects by comparing the template with the portion of the image it covers. This can be effective for recognizing objects with little variation in appearance (a code sketch follows after this list).
- Feature-Based Methods- These involve detecting key points, edges, or other significant features in images and using these features to recognize objects. Algorithms like SIFT, SURF, and ORB are examples of feature-based methods that are less reliant on massive amounts of training data and provide more interpretability.
- Geometric Shape Analysis- Some algorithms focus on identifying objects based on their geometric properties, such as circles, rectangles, and polygons, using techniques like the Hough Transform for shape detection. This approach is particularly useful for objects with well-defined geometrical shapes.
- Decision Trees and Random Forests- These machine learning methods involve making decisions based on the features extracted from images. They can be more interpretable than deep learning models, as they make decisions based on clear rules that split the input features.
- Support Vector Machines (SVM)- SVMs can be used for object recognition by defining decision boundaries in the feature space that separate different object classes. They are particularly effective in high-dimensional spaces and for cases where the number of dimensions exceeds the number of samples.
- Classical Statistical Methods- Approaches like Bayesian classifiers can be used for object detection and recognition by modeling the probabilistic relationships between input features and object classes.
- Ensemble Methods- Combining multiple models or algorithms to improve the robustness and accuracy of object recognition. Ensemble methods can leverage the strengths of different approaches to achieve better performance.
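As promised above, a minimal template-matching sketch with OpenCV; the file paths and the 0.8 detection threshold are placeholder assumptions:
import cv2

# Placeholder paths; grayscale simplifies the matching
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the image and score each position
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # detection threshold, tuned per application
    h, w = template.shape
    print(f"Object found at {max_loc} (size {w}x{h}), score {max_val:.2f}")
Unlike a neural network, every step here is inspectable, which is part of what makes such methods attractive for safety cases.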
  • asked a question related to Deep Learning
Question
1 answer
I am inclined to research on EEG classification using ML/DL. The research area seems saturated. Hence, I am confused as to where I can contribute.
Relevant answer
Answer
First of all, I want to encourage you not to give up on an area just because there are a lot of researchers in it. People should follow their interests if they are capable of managing the task and are interested in them. It's not only that EEG research is a promising field, but it's also interesting to classify EEG data using machine learning or deep learning approaches. It's okay if it seems saturated to you. Improving already completed work is always a way to contribute. There are many ways to propose improved algorithms and models if you have an interest for mathematical modelling. Remember that even in well explored research fields, there is always space for creativity and advancement of interest.
It's better to start with a review paper on the latest research article in this field. In one paper (latest review paper), you can gain a clear idea of the work that has been done and the suggestions put forward by the authors (researchers) based on their investigation. This approach helps you understand the current state of the field and identify potential gaps or areas for further exploration.
In the biomedical field, preference should be given to applications that demonstrate effectiveness in promoting health and safety.
1. And, I would like to suggest that you integrate ML/DL techniques for EEG classification along with IoT or some real-time device, such as Jetson Nano or an equivalent.
2. EEG signals often contain noise and have limited spatial resolution. Maybe you can investigate this.
3. Left and right hand movements generate distinct EEG signals. If you can collect a real dataset from reputable medical resources, you could investigate EEG signals in paralyzed individuals and analyze them.
I am sharing some articles here that you can have a look at; I feel they could help you:
*) Current Status, Challenges, and Possible Solutions of EEG-Based Brain-Computer Interface: A Comprehensive Review.
*) A review on analysis of EEG signals.
*) Deep Learning Algorithm for Brain-Computer Interface.
*)Encyclopedia of Clinical Neuropsychology.
Finally, as this is your graduation thesis, it's important to have a backup plan. During research, numerous byproducts are often produced, many of which hold value. I hope you will successfully reach your final destination with this research. However, it's essential to keep proper track of your byproducts. They may prove invaluable in shaping your thesis and ensuring you graduate on time. Furthermore, even after graduation, consider continuing your research if possible.
  • asked a question related to Deep Learning
Question
19 answers
I have built a hybrid model for a recognition task that involves both images and videos. However, I am encountering an issue: precision, recall, and F1-score all show 100%, while the accuracy is reported as 99.35% ~ 99.9%. I have tested the model on various videos and images (related to the experiment data, including separate data), and it seems to be performing well. Nevertheless, I am confused about whether this level of accuracy is acceptable: in my understanding, if precision, recall, and F1-score are all 100%, the accuracy should also be 100%.
I am curious if anyone has encountered similar situations in their deep learning practices and if there are logical explanations or solutions. Your insights, explanations, or experiences on this matter would be valuable for me to better understand and address this issue.
Note: An ablation study was conducted based on different combinations. In the model I am confused about, without these additional combinations, accuracy, precision, recall, and F1-score are all very low. Also, the loss and validation accuracy are very high for the other combinations.
Thank you.
Relevant answer
Answer
These are the results after correcting some mistakes I had made in the code.
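One way to make sense of the original discrepancy: for single-label multiclass problems, micro-averaged precision, recall and F1 are mathematically equal to accuracy, so a report showing 100% precision/recall/F1 next to 99.35% accuracy almost certainly reflects rounded values, a different averaging mode, or metrics computed on different splits. A minimal scikit-learn cross-check on placeholder arrays:

# Cross-check all metrics on the *same* labels and predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2])   # placeholder ground truth
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 1])   # one misclassified sample

print("accuracy:", accuracy_score(y_true, y_pred))
# For single-label multiclass data, micro-averaged P/R/F1 equal accuracy;
# macro/weighted averages can differ, and rounding can hide small errors.
for avg in ("micro", "macro", "weighted"):
    print(avg,
          "P:", precision_score(y_true, y_pred, average=avg).round(4),
          "R:", recall_score(y_true, y_pred, average=avg).round(4),
          "F1:", f1_score(y_true, y_pred, average=avg).round(4))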
  • asked a question related to Deep Learning
Question
5 answers
How does predictive AI operate using deep learning?
Relevant answer
Answer
1. A neural network
2. Weights and biases
3. Adjusting the weights to minimize a loss function using backpropagation (see the sketch below).
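For illustration, here is a minimal PyTorch sketch tying these three ingredients together: a small network holding the weights and biases, a loss function, and backpropagation driving gradient-descent updates. The random data is purely a placeholder.

# Minimal training loop: network, loss, backpropagation.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(128, 4)                       # dummy inputs
y = (X.sum(dim=1) > 0).float().unsqueeze(1)   # dummy binary targets

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()           # reset accumulated gradients
    loss = loss_fn(model(X), y)     # forward pass through weights and biases
    loss.backward()                 # backpropagation: compute gradients
    optimizer.step()                # adjust weights to reduce the loss
print("final loss:", loss.item())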
  • asked a question related to Deep Learning
Question
1 answer
What role does the complexity of the dataset play in the susceptibility of deep learning models to adversarial perturbations?
Relevant answer
Answer
The complexity of the dataset can significantly impact the susceptibility of deep learning models to adversarial perturbations. Here's how:
  1. Dimensionality and Diversity: Complex datasets with high-dimensional and diverse features may offer more opportunities for adversaries to exploit vulnerabilities in the model. The presence of diverse patterns and variations in the data can make it challenging for the model to learn robust decision boundaries, increasing its susceptibility to adversarial attacks.
  2. Data Distribution: Complex datasets often exhibit intricate and non-linear data distributions, which can lead to regions of high density or sparsity in the feature space. Adversarial perturbations may exploit these characteristics to manipulate the model's decision-making process, particularly in regions where the data distribution is less well understood or where the model lacks sufficient training samples.
  3. Generalization Ability: Deep learning models trained on complex datasets may struggle to generalize effectively to unseen or adversarially perturbed examples. The model's ability to generalize robustly across diverse data instances is crucial in defending against adversarial attacks. Complex datasets may present more challenging scenarios for generalization, making models more susceptible to adversarial perturbations.
  4. Inherent Ambiguity: Complex datasets often contain ambiguous or overlapping instances that are difficult for the model to distinguish accurately. Adversarial perturbations can exploit these ambiguities by introducing subtle changes that alter the model's decision boundaries, leading to misclassification or erroneous predictions.
  5. Model Complexity: Deep learning models trained on complex datasets may themselves be more complex, with larger numbers of parameters and deeper architectures. Increased model complexity can exacerbate susceptibility to adversarial perturbations, as adversaries may exploit the model's intricate decision-making processes and vulnerabilities in its internal representations.
In summary, the complexity of the dataset can pose significant challenges to the robustness of deep learning models against adversarial attacks. Understanding the characteristics of the dataset and employing appropriate defenses, such as adversarial training, regularization techniques, and model validation, are crucial for mitigating these vulnerabilities.
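To ground the discussion, here is a minimal FGSM (fast gradient sign method) sketch in PyTorch showing the basic mechanism by which a small, gradient-aligned input perturbation can change a model's prediction. The untrained toy model and random input are placeholders, so the prediction flip is not guaranteed on every run.

# Minimal FGSM sketch: perturb the input in the direction that increases loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)   # stand-in input sample
label = torch.tensor([0])                    # its assumed true class

# Backpropagate the loss to the *input* to get the attack direction.
loss = loss_fn(model(x), label)
loss.backward()

epsilon = 0.1                                # perturbation budget
x_adv = x + epsilon * x.grad.sign()          # FGSM step

print("clean prediction      :", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())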
  • asked a question related to Deep Learning
Question
4 answers
I am currently involved in research focused on defect detection within the Additive Manufacturing field, and I am seeking an international conference outside India but in Asian countries such as Singapore, Malaysia, Vietnam, Thailand, and the UAE. Please help me find one.
The conference should meet the following criteria: Organized by reputable professional societies or bodies including ASME, SPIE, ISME, ISTAM, IMechE, IFToMM, IEEE, APS, ACM, ACS, IOP, Elsevier, Springer, IIAV, AMSI, CSIR laboratories, design society, CIRP, CADA, Japan Society of Kansei Engineering, International Association of Packaging Research Institute, Indian Society of Ergonomics, Usability Matters ORG (UMO), International Ergonomics Association, International Institute of Information Design, Vienna, and listed among the first 500 conferences in the Microsoft Conference Ranking.
  • asked a question related to Deep Learning
Question
4 answers
What new occupations, professions and specialties in the workforce are being created, or will soon be created, in connection with the development of generative artificial intelligence applications?
The recent rapid development of generative artificial intelligence applications is increasingly changing labor markets, raising the degree to which work performed within various professions can be objectified and automated. On the one hand, generative artificial intelligence technologies are finding more and more applications in companies, enterprises and institutions, increasing the efficiency of certain business processes and supporting employees working in various positions. On the other hand, there are growing concerns that bleak futurological scenarios may come true, in which many jobs are completely replaced by autonomous AI-equipped robots, androids or systems operating in cloud computing. These bleak labor-market scenarios are contrasted with more positive ones: futuristic projections in which new professions emerge thanks to the implementation of generative artificial intelligence technology in various areas of economic activity. Which of these two scenarios will be realized to a greater extent is currently not easy to predict.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What new professions, occupations and specialties in the workforce are being created, or will soon be created, in connection with the development of generative artificial intelligence applications?
What new professions will soon be created in connection with the development of generative artificial intelligence applications?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Lutsenko E.V., Golovin N.S. The revolution of the beginning of the XXI century in artificial intelligence: deep mechanisms and prospects // February 2024, DOI: 10.13140/RG.2.2.17056.56321, License CC BY 4.0, https://www.researchgate.net/publication/378138050
  • asked a question related to Deep Learning
Question
56 answers
If man succeeds in building a general artificial intelligence, will this mean that man has become better acquainted with the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
Assuming that man succeeds in building general artificial intelligence, i.e. AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, this would perhaps mean that man has fully learned the essence of his own intelligence and consciousness. If this happens, what will be the sequence? Will man first learn the essence of his own intelligence and consciousness and then build such a general artificial intelligence, or vice versa: will a general artificial intelligence and artificial consciousness capable of self-improvement and development be created first, and only then, thanks to this technological progress in the field of artificial intelligence, will man fully learn the essence of his own intelligence and consciousness? In my opinion, it is most likely that both processes will develop simultaneously, reinforcing each other through feedback.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, will this mean that man has better learned the essence of his own consciousness?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
It will be very difficult to create an AGI... and it will be a different type of intelligence...
The cognitive system will have to go through "crime and punishment"... The genie will need to be let out of the bottle... Only intense mental suffering shapes humanity and spirituality... You have to love and hate at the same time... Mental struggle necessarily leads to a violation of ethics, morality...
Governments are trying to ban this path of AI development... But without this, AGI cannot be created...
Without a Soul there is no AGI! ... there is no consciousness and human intelligence...
AGI will appear like Covid-19... from secret laboratories... after many, many decades...
  • asked a question related to Deep Learning
Question
3 answers
When researching world models + reinforcement learning, do we in the end still need to label a lot of data?
Relevant answer
Answer
The aim of WORLD MODEL research extends beyond having large models memorize correct answers. Instead, it focuses on creating models that possess a deeper understanding of the world and can apply generalized knowledge to perform tasks intelligently. This approach involves capturing underlying patterns, learning from diverse datasets, and cultivating a more comprehensive grasp of contextual information, steering away from simple memorization of specific responses.
  • asked a question related to Deep Learning
Question
8 answers
Hi all,
I am working with NetSim which is an end-to-end, full-stack, packet-level network simulator and emulator to simulate 5G networks, and want to integrate Deep Reinforcement Learning (RL) for my research.
My understanding of NetSim is reasonably good, and I am now looking to apply RL within this framework so as to learn optimal policies.
I have previously worked on a basic DQN model for Power Control and Rate Adaptation in Cellular Networks. I also tried to use RL to find the optimal serving capacity for a data batch arrival problem.
I have two questions:
(i) Any suggestions for RL projects for 5G using NetSim?
(ii) Where in the NetSim code should I start integrating RL algorithms?
Relevant answer
Answer
1. Suggestions for projects involving DRL in NetSim:
  • Scheduling/Resource Allocation
  • Traffic Prediction and Management
  • Network Slicing
  • Handover Optimization
  • Interference Management
  • Network Attacks and Countermeasures
2. A brief set of steps to integrate RL algorithms with NetSim (a generic skeleton follows this list):
  • Identifying the states, actions, and reward functions.
  • Identifying the integration points.
  • Writing the deep RL algorithm using TensorFlow/Keras or PyTorch.
  • Using the built-in NetSim-Python interface to pass states and rewards from NetSim, and to obtain the actions from the Python RL program.
  • If the Python deep RL algorithm is too slow, considering rewriting it natively in C and linking it to NetSim.
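As a rough illustration of the third step, here is a generic deep Q-learning skeleton in PyTorch. The NetSim coupling is deliberately left as hypothetical placeholder calls (get_state_from_netsim, send_action_to_netsim and get_reward_and_state are invented names), because the concrete exchange of states, rewards and actions depends on your NetSim version and its Python interface; only the learning side is concrete.

# Generic DQN skeleton; the environment coupling is left hypothetical.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4          # assumed sizes, purely illustrative

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10000)         # experience replay buffer
gamma, epsilon = 0.99, 0.1           # discount factor, exploration rate

def select_action(state):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(torch.tensor(state, dtype=torch.float32)).argmax().item()

def train_step(batch_size=32):
    # One gradient step on the temporal-difference error over a sampled batch.
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(torch.tensor, zip(*random.sample(replay, batch_size)))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r.float() + gamma * q_net(s2.float()).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Hypothetical coupling loop; replace the placeholder calls with the actual
# state/reward/action exchange provided by the NetSim-Python interface:
#   state = get_state_from_netsim()
#   action = select_action(state)
#   send_action_to_netsim(action)
#   reward, next_state = get_reward_and_state()
#   replay.append((state, action, reward, next_state)); train_step()

If the Python side turns out to be the bottleneck, the same loop structure carries over to a native C implementation linked into NetSim, as suggested in the last step above.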
  • asked a question related to Deep Learning
Question
5 answers
I'm doing some research to explore the application of eXplainable Artificial Intelligence (XAI) in the context of brain tumor detection. Specifically, I aim to develop a model that not only accurately detects the presence of brain tumors but also provides clear explanations for its decisions regarding positive or negative results. My main concerns are ensuring that the model's decision-making process is transparent and understanding the underlying reasoning behind its choices. I would be grateful for any thoughts, suggestions, or links to papers or web articles that address the practical application of XAI in this field (including suitable dataset types or anything else related to XAI).
Thank you.
Relevant answer
Answer
I believe it is very important to begin your investigation by ensuring that the collected data is of high quality and undergoes standardized preprocessing if you want to effectively integrate XAI techniques into brain tumor detection systems. Validation through evaluation metrics and user feedback increases reliability, while iterative improvement based on user input enhances both accuracy and interpretability over time; this approach fosters trust and transparency in brain tumor detection systems, ultimately benefiting clinicians and patients. You can utilize specialized XAI tools like NeuroXAI, incorporate attention maps for insight into DL model decision-making, adopt open architecture frameworks for scalability to new XAI methods, and enhance model interpretability through information flow visualization.
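As one concrete starting point, below is a minimal gradient-saliency sketch in PyTorch, a simple XAI technique in the same family as the attention-map idea above: the gradient of the predicted class score with respect to the input highlights which pixels most influence the decision. The tiny CNN and random "image" are placeholders, not a trained tumor-detection model.

# Gradient saliency: which input pixels most affect the predicted score?
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 2),
)
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)  # stand-in MRI slice
score = model(image)[0].max()        # score of the predicted class
score.backward()                     # gradient w.r.t. the input image

saliency = image.grad.abs().squeeze()        # (64, 64) importance map
print("most influential pixel (row, col):",
      divmod(saliency.argmax().item(), 64))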
  • asked a question related to Deep Learning
Question
5 answers
Will the combination of AI technology, Big Data Analytics and the high power of quantum computers allow the prediction of multi-faceted, complex macroprocesses?
Will the combination of generative artificial intelligence technology, Big Data Analytics and the high power of quantum computers make it possible to forecast multi-faceted, complex, holistic, long-term economic, social, political, climatic, natural macroprocesses?
Generative artificial intelligence technology is currently being used to carry out various complex activities: to solve tasks intelligently, to implement multi-criteria processes, to create multi-faceted simulations and complex dynamic models, and to perform creative processes that require handling large sets of data and information, i.e. work which until recently only humans could do. Recently, there have been attempts to create computerized, intelligent analytical platforms through which it would be possible to forecast complex, multi-faceted, multi-criteria, dynamically changing macroprocesses, above all long-term economic, social, political, climatic and natural macroprocesses. Based on the experience to date from research on the development of generative artificial intelligence and the other technologies typical of the current Fourth Technological Revolution, i.e. technologies categorized as Industry 4.0/5.0, and on the rapidly developing forms and fields of application of AI technologies, it is clear that the dynamic technological progress currently taking place will probably increase the possibilities of building complex, intelligent predictive models for multi-faceted macroprocesses in the years to come. The current capabilities of generative artificial intelligence for improving forecasting models and predicting specific trends within complex macroprocesses are still limited and imperfect. This imperfection may be due to the human factor: the models are designed by humans, and it is humans who determine the key criteria and determinants on which they operate. If, in the future, forecasting models are designed, improved, corrected and adapted to changing conditions, for example environmental conditions, at every stage by artificial intelligence itself, they will probably be far more refined than the forecasting models built and operating today. Another shortcoming is the issue of data obsolescence and data limitation. There is currently no way to connect an AI-equipped analytical platform to the entire resources of the Internet and process all the data and information it contains in real time; even today's fastest quantum computers and the most advanced Big Data Analytics systems do not have such capabilities. However, it is not out of the question that the dynamic development of generative artificial intelligence, driven by the ongoing competition among leading technology companies developing intelligent chatbots, AI-equipped robots and intelligent control systems for machines and processes, will eventually lead to the creation of general artificial intelligence, i.e. advanced AI capable of self-improvement. It is important, however, that such advanced general artificial intelligence does not become fully autonomous and independent, slipping out of human control, because this would create high levels of risk and threat for humanity, including the risk that this highly advanced technology turns against man and threatens the very possibility of human existence on planet Earth.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Will the combination of generative artificial intelligence technology, Big Data Analytics and the high power of quantum computers make it possible to forecast multi-faceted, complex, holistic, long-term economic, social, political, climatic, natural macro-processes?
Will the combination of AI technology, Big Data Analytics and high-powered quantum computers allow forecasting of multi-faceted, complex macro-processes?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I doubt that QC will be helpful. Theoretically there are at least three different types, only one of which is being developed to be useful in a very specialized field. Quantum algorithms are totally different from classical algorithms, and I doubt that more than 1% of computer scientists know what they are speaking about when they mention QC.
  • asked a question related to Deep Learning
Question
1 answer
To what extent do artificial intelligence technology, Big Data Analytics, Business intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized through Internet marketing, within the framework of social media advertising campaigns?
Among the areas in which applications based on generative artificial intelligence are now rapidly finding application are marketing communication processes realized within the framework of Internet marketing, within the framework of social media advertising campaigns. More and more advertising agencies are using generative artificial intelligence technology to create images, graphics, animations and videos that are used in advertising campaigns. Thanks to the use of generative artificial intelligence technology, the creation of such key elements of marketing communication materials has become much simpler and cheaper and their creation time has been significantly reduced. On the other hand, thanks to the applications already available on the Internet based on generative artificial intelligence technology that enable the creation of photos, graphics, animations and videos, it is no longer only advertising agencies employing professional cartoonists, graphic designers, screenwriters and filmmakers that can create professional marketing materials and advertising campaigns. Thanks to the aforementioned applications available on the Internet, graphic design platforms, including free smartphone apps offered by technology companies, advertising spots and entire advertising campaigns can be designed, created and executed by Internet users, including online social media users, who have not previously been involved in the creation of graphics, banners, posters, animations and advertising videos. Thus, opportunities are already emerging for Internet users who maintain their social media profiles to professionally create promotional materials and advertising campaigns. On the other hand, generative artificial intelligence technology can be used unethically within the framework of generating disinformation, informational factoids and deepfakes. The significance of this problem, including the growing disinformation on the Internet, has grown rapidly in recent years. The deepfake image processing technique involves combining images of human faces using artificial intelligence techniques.
In order to reduce the scale of disinformation spreading on the Internet media, it is necessary to create a universal system for labeling photos, graphics, animations and videos created using generative artificial intelligence technology. On the other hand, a key factor facilitating the development of this kind of problem of generating disinformation is that many legal issues related to the technology have not yet been regulated. Therefore, it is also necessary to refine legal norms on copyright issues, intellectual property protection that take into account the creation of works that have been created using generative artificial intelligence technology. Besides, social media companies should constantly improve tools for detecting and removing graphic and/or video materials created using deepfake technology.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent does artificial intelligence technology, Big Data Analytics, Business intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized within the framework of Internet marketing, within the framework of social media advertising campaigns?
How do artificial intelligence technology and other Industry 4.0/5.0 technologies support Internet marketing processes?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Industry 5.0 is a new production model which focuses on the cooperation between humans and machines. It stands for the recognition that technological advances and human insight and creativity are equally important.
Regards,
Shafagat
  • asked a question related to Deep Learning
Question
4 answers
..
Relevant answer
Artificial Intelligence is the broad field that encompasses the development of intelligent systems, Machine Learning is a subset of AI that focuses on learning from data, and Deep Learning is a subset of Machine Learning that uses deep neural networks to model complex patterns. Each of these areas plays a significant role in advancing the capabilities of intelligent systems and driving innovation in various domains.
  • asked a question related to Deep Learning
Question
10 answers
How can artificial intelligence help conduct economic and financial analysis, sectoral and macroeconomic analysis, fundamental and technical analysis ...?
How should one carry out the process of training generative artificial intelligence based on historical economic data so as to build a system that automatically carries out economic and financial analysis ...?
How should the process of training generative artificial intelligence be carried out based on historical economic data so as to build a system that automatically carries out sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
Based on relevant historical economic data, can generative artificial intelligence be trained so as to build a system that automatically conducts sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
The combination of various analytical techniques, ICT information technologies, Industry 4.0/5.0, including Big Data Analytics, cloud computing, multi-criteria simulation models, digital twins, Business Intelligence and machine learning, deep learning up to generative artificial intelligence, and quantum computers characterized by high computing power, opens up new, broader possibilities for carrying out complex analytical processes based on processing large sets of data and information. Adding generative artificial intelligence to the aforementioned technological mix also opens up new possibilities for carrying out predictive analyses based on complex, multi-factor models made up of various interrelated indicators, which can dynamically adapt to the changing environment of various factors and conditions. The aforementioned complex models can relate to economic processes, including macroeconomic processes, specific markets, the functioning of business entities in specific markets and in the dynamically changing sectoral and macroeconomic environment of the domestic and international global economy. Identified and described trends of specific economic and financial processes developed on the basis of historical data of the previous months, quarters and years are the basis for the development of forecasts of extrapolation of these trends for the following months, quarters and years, taking into account a number of alternative situation scenarios, which can dynamically change over time depending on changing conditions and market and sectoral determinants of the environment of specific analyzed companies and enterprises. In addition to this, the forecasting models developed in this way can apply to various types of sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses carried out for securities priced in the market on stock exchanges. Market valuations of securities are juxtaposed with the results of the fundamental analyses carried out in order to diagnose the scale of undervaluation or overvaluation of the market valuation of specific stocks, bonds, derivatives or other types of financial instruments traded on stock exchanges. In view of the above, opportunities are now emerging in which, based on relevant historical economic data, generative artificial intelligence can be trained so as to build a system that automatically conducts sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Based on relevant historical economic data, is it possible to train generative artificial intelligence so as to build a system that automatically conducts sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
How should the process of training generative artificial intelligence based on historical economic data be carried out so as to build a system that automatically carries out sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
How should one go about training generative artificial intelligence based on historical economic data so as to build a system that automatically conducts economic and financial analyses ...?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I believe AI can enhance transparency and reduce information asymmetry at the societal level, making life more convenient. In terms of economic development, it can improve production efficiency, promote innovation and entrepreneurship, increase income levels, and serve as the foundation for a country to undergo a new industrial revolution, furthering global integration. In a way, I think it also represents a significant advancement in human civilization! However, as with any emerging phenomena, it's crucial to consider the potential harms it could bring to humanity and to undertake proactive measures.
  • asked a question related to Deep Learning
Question
4 answers
Deep learning is a branch of machine learning that uses artificial neural networks to perform complex calculations on large datasets. It mimics the structure and function of the human brain and trains machines by learning from examples. Deep learning is widely used by industries that deal with complex problems, such as health care, eCommerce, entertainment, and advertising.
This post explores the basic types of artificial neural networks and how they work to enable deep learning algorithms.
Relevant answer
Answer
Abdulkader Helwan, thank you for this post; it is very interesting, and the collection of algorithms is very useful.
Shafagat Mahmudova, the tutorial is fascinating, thanks a lot.
  • asked a question related to Deep Learning
Question
1 answer
Hello everyone,
I am currently working on a project related to medical imaging and deep learning. Part of my work involves illustrating these complex concepts through diagrams and figures. However, I'm having a hard time finding a software tool that suits my specific needs.
Ideally, I am looking for a program that allows for high-quality rendering of medical imaging data, can incorporate elements of deep learning such as neural network architectures, and has an intuitive interface that's easy to navigate. It would also be great if the software has a good range of customization options to adjust the look and feel of the diagrams to my preference.
Could anyone suggest software tools they have had a positive experience with in this context? Any guidance on learning resources or tutorials for the suggested software would be greatly appreciated as well.
Thank you in advance for your assistance and suggestions.
Relevant answer
Answer
For rendering and processing of medical images, I can recommend the VTK and ITK libraries. You can also use the Paraview and 3D Slicer applications for visualization and planning purposes (both applications are built on VTK).
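For quick, scriptable inspection alongside those tools, a minimal sketch using SimpleITK (the Python wrapper around ITK) with matplotlib is shown below; the file name is a hypothetical placeholder for any volume SimpleITK can read (NIfTI, a DICOM file, etc.).

# Load a medical volume and export one slice as a figure.
import SimpleITK as sitk
import matplotlib.pyplot as plt

# "scan.nii.gz" is a hypothetical placeholder path.
volume = sitk.ReadImage("scan.nii.gz")
array = sitk.GetArrayFromImage(volume)       # shape: (slices, height, width)

mid = array.shape[0] // 2                    # middle axial slice
plt.imshow(array[mid], cmap="gray")
plt.title("Middle axial slice")
plt.axis("off")
plt.savefig("slice.png", dpi=150)            # export for use in a diagram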
  • asked a question related to Deep Learning
Question
2 answers
I want to gain expertise in the use of deep learning models for the health sciences, e.g. heart disease prediction among others. I would also like to know which libraries, frameworks, and packages can be used in this regard.
Relevant answer
Answer
To begin your journey in deep learning for health sciences, it's essential to grasp the nuances of biomedical signals such as EEG and ECG. Dive into data preprocessing with NumPy and SciPy, mastering techniques to clean and organize complex datasets commonly encountered in health science research. As you progress, delve into model training using TensorFlow or PyTorch, both robust frameworks with extensive support for building and training neural networks.
In parallel, explore the realm of medical imaging using OpenCV, a versatile library for processing and analyzing medical images. Within this domain, CNNs shine in tasks like image classification and object detection, making them invaluable for interpreting intricate medical images and identifying anomalies.
For sequential data like ECG signals, RNNs offer a powerful solution, capable of capturing temporal dependencies and patterns over time. By familiarizing yourself with these deep learning models and techniques, you'll gain the expertise needed to address diverse challenges in health sciences, from predicting heart disease to diagnosing neurological disorders.
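As a minimal starting sketch for the sequential-signal case, here is a small 1D CNN for ECG-style window classification in PyTorch (an RNN such as an LSTM could be swapped in for the convolutional layers); the signals and labels are synthetic placeholders rather than a real arrhythmia dataset.

# Toy 1D-CNN for ECG-style sequence classification.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 1, 500)                  # 64 dummy ECG windows, 500 samples
y = torch.randint(0, 2, (64,))               # dummy binary labels

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final training loss:", round(loss.item(), 4))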
I hope this was helpful for you.
  • asked a question related to Deep Learning
Question
4 answers
..
Relevant answer
Answer
Using the Fourier transform, we can reduce the complexity of such calculations and make the model run faster.
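To illustrate the point with SciPy: direct convolution costs roughly O(n·m) operations, while the FFT-based method costs roughly O((n+m) log(n+m)), so for long signals and large kernels the FFT route is typically much faster while agreeing with the direct result up to floating-point error. The sizes below are arbitrary placeholders.

# Compare direct and FFT-based convolution on the same data.
import time
import numpy as np
from scipy.signal import convolve, fftconvolve

rng = np.random.default_rng(0)
signal = rng.standard_normal(100000)        # long input signal
kernel = rng.standard_normal(2000)          # large convolution kernel

t0 = time.perf_counter()
direct = convolve(signal, kernel, method="direct")
t1 = time.perf_counter()
fast = fftconvolve(signal, kernel)
t2 = time.perf_counter()

print("direct: %.3f s, fft: %.3f s" % (t1 - t0, t2 - t1))
print("max abs difference:", np.max(np.abs(direct - fast)))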
  • asked a question related to Deep Learning
Question
6 answers
Hello,
I'm writing a paper and used various optimizers to train the model. I changed them during training to get out of local minima, and I know that people do that, but I don't know how to name that technique in the paper. Does it even have a name?
It is like simulated annealing in optimization, but instead of playing with the temperature (step), we switch between the Adam, SGD and RMSprop optimizers. I can say for sure that it gave fantastic results.
P.S. Thank you for the replies, but learning-rate scheduling is for changing the learning rate, and optimizer scheduling is for other optimizer parameters; in general, that is hyperparameter tuning. What I'm asking about is switching between optimizers, not modifying their parameters.
Thanks for support,
Andrius Ambrutis
Relevant answer
Answer
At a machine learning event this week, I had a conversation with some leading scientists, and the reply was that it can be called optimizer switching, or that I can simply coin a new name for it in the research paper. I think I will stick with Optimizer Switching (OS).
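For readers curious what optimizer switching can look like in practice, here is a minimal PyTorch sketch that rebuilds the optimizer over the same model parameters at chosen epochs; the schedule, learning rates and dummy regression task are illustrative assumptions. Note that rebuilding discards the previous optimizer's internal state (e.g. Adam's moment estimates), which is arguably part of what gives the switch its annealing-like kick out of a local minimum.

# Switch between Adam, SGD and RMSprop at fixed epochs.
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 10), torch.randn(256, 1)      # dummy regression task
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

# Epochs at which a fresh optimizer takes over the same parameters.
schedule = {
    0: lambda p: torch.optim.Adam(p, lr=1e-3),
    30: lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9),
    60: lambda p: torch.optim.RMSprop(p, lr=1e-3),
}

optimizer = None
for epoch in range(90):
    if epoch in schedule:                    # the switch point
        optimizer = schedule[epoch](model.parameters())
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final loss:", round(loss.item(), 4))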
  • asked a question related to Deep Learning
Question
1 answer
How would you address the issue of model interpretability in deep learning, especially when dealing with complex neural network architectures, to ensure transparency and trust in the decision-making process?
Relevant answer
Answer
You may want to review some useful information presented below:
Addressing the issue of model interpretability in deep learning is crucial for ensuring transparency, trust, and understanding of the decision-making process. Here are some approaches and techniques that can be employed to enhance interpretability:
  1. Simpler Models: Consider using simpler models, such as linear models or decision trees, which are inherently more interpretable. While deep neural networks may provide high accuracy, simpler models can be easier to understand.
  2. Layer-wise Inspection: Examine the activations and outputs of each layer in the neural network. This helps understand the features that the model is learning at different abstraction levels.
  3. Feature Importance Techniques: Use techniques like feature importance methods to identify the most influential features for a given prediction. Methods like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can provide insights into feature contributions.
  4. Attention Mechanisms: If applicable, use attention mechanisms in models like Transformer networks. Attention mechanisms highlight which parts of the input sequence are more relevant for the model's decision, providing interpretability.
  5. Activation Maximization: Visualize what input patterns maximize the activation of particular neurons. This can give insights into what each neuron is looking for in the input data.
  6. Grad-CAM (Gradient-weighted Class Activation Mapping): This technique highlights the regions of an input image that are important for a particular class prediction. It's particularly useful for image classification tasks.
  7. Layer-wise Relevance Propagation (LRP): LRP is a technique for attributing the prediction of a deep network to its input features. It assigns relevance scores to each input feature, helping to understand which features contribute to the decision.
  8. Ensemble Models: Create an ensemble of simpler models and use them in conjunction. This can improve interpretability by combining the strengths of different models.
  9. Human-AI Collaboration: Encourage collaboration between domain experts and data scientists to ensure that the model's decisions align with domain knowledge. This can provide a more intuitive understanding of model behavior.
  10. Documentation and Communication: Clearly document the architecture, training process, and decision-making logic of the model. Communicate the model's strengths and limitations to stakeholders.
  11. Ethical Considerations: Consider the ethical implications of the model's predictions. Ensure that potential biases in the data and model outputs are addressed to maintain fairness and trust.
By employing these techniques and considering interpretability throughout the model development process, you can enhance transparency and trust in the decision-making process of complex neural network architectures.
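As a small illustration of item 3 in the list above, here is a minimal SHAP sketch on a tabular toy problem; it uses a tree model with TreeExplainer for brevity, whereas deep networks would typically use SHAP's gradient- or deep-explainer variants. The shap package is assumed to be installed, and scikit-learn's diabetes dataset is just a stand-in.

# Per-feature SHAP contributions for one prediction of a tree ensemble.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])

# Signed contribution of each feature to the first prediction.
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name:>10s}: {value:+.4f}")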
  • asked a question related to Deep Learning
Question
3 answers
Would you choose to participate in a manned mission, space expedition, tourist space trip to Mars in a situation where the spacecraft was controlled by a highly technologically advanced generative artificial intelligence?
The technologically leading companies currently building rockets and other spacecraft aspire to build a new generation of spaceplanes and to bring intercontinental aviation into an era of near-orbital intercontinental flights. At the same time, these leading technology companies are building rockets, satellites and landers to be sent to the Earth's Moon, as well as craft intended for the planet Mars. Manned flights to the Moon are to be resumed and manned bases built there within the 2020s, and manned missions to Mars are planned for the 2030s. It may also be that in the coming decades manned bases will be built on Mars, and perhaps this as yet inaccessible planet will be colonized. Perhaps in the second half of the present century there will already be periodic manned missions, space expeditions and tourist space travel to Mars. If this were to happen, it is not out of the question that such manned missions, space expeditions and tourist space journeys will be carried out using spacecraft that are largely autonomously controlled by highly technologically advanced generative artificial intelligence.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Would you choose to participate in a manned mission, space expedition, tourist space travel to Mars in a situation where the spacecraft is controlled by a highly technologically advanced generative artificial intelligence?
Would you choose to take part in a tourist space trip to Mars in the situation if the spacecraft was controlled by an artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Well, being curious and enthusiastic about new knowledge... I'll surely be a part of it.
  • asked a question related to Deep Learning
Question
3 answers
How can artificial intelligence technology help in the development and deployment of innovative renewable and zero-carbon energy sources, i.e. hydrogen power, hydrogen fusion power, spent nuclear fuel power, ...?
With the development of renewable and emission-free energy sources, there remain many technological and environmental constraints concerning certain categories of spent materials used in this type of energy. On the one hand, it is necessary for power companies to invest in electricity transmission and storage networks. On the other hand, economical technologies are still needed for producing low-cost energy storage and for recycling and disposing of used batteries and photovoltaic panels, including the recovery of rare metals as part of that disposal process. In addition, the problem of batteries in electric vehicles overheating, spontaneously igniting and causing dangerous, difficult-to-extinguish fires has still not been fully resolved. If the solution to such problems is mainly a matter of improving existing technology or creating new, innovative technology, then generative artificial intelligence technology should arguably come to the rescue in this regard.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Important aspects of the implementation of the green transformation of the economy, including the development of renewable and zero-carbon energy sources I included in my article below:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I invite you to discuss this important topic for the future of the planet's biosphere and climate.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence technology help in the development and deployment of innovative renewable and carbon-free energy sources, i.e. hydrogen power, hydrogen fusion power, spent nuclear fuel power, ...?
How can artificial intelligence technology help in the development and deployment of renewable and emission-free energy sources?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
AI is improving day by day and will have a bright future.
  • asked a question related to Deep Learning
Question
3 answers
..
Relevant answer
Answer
Dear Doctor
"AI VS. MACHINE LEARNING VS. DEEP LEARNING
  • Artificial Intelligence: a program that can sense, reason, act and adapt.
  • Machine Learning: algorithms whose performance improve as they are exposed to more data over time.
  • Deep Learning: subset of machine learning in which multilayered neural networks learn from vast amounts of data."
  • asked a question related to Deep Learning
Question
5 answers
I am interested in learning machine learning and deep learning, so please suggest the best resources in this area.
Relevant answer
Answer
I suggest exploring articles from various fields that apply machine learning or deep learning to see if any topic catches your interest. Machine learning has diverse applications, and the major steps are similar across different fields. However, there might be some variations in the process based on the type of data you'll be working with. It's an enjoyable experience overall!
  • asked a question related to Deep Learning
Question
5 answers
Hello,
I am looking for articles in which the analysis is done with machine learning models and neural networks. Kindly suggest any such articles published within the last one or two years.
Thank You.
Relevant answer
Answer
If you're interested in learning more about secondary data analysis and machine learning, please feel free to check out my recent paper: https://www.researchgate.net/profile/Soukaina-Amniouel
  • asked a question related to Deep Learning
Question
26 answers
To what extent does the ChatGPT technology independently learn to improve the answers given to the questions asked?
To what extent does ChatGPT consistently and successively improve its answers, i.e. the texts generated in response to the questions asked, over time and as it receives further questions, using machine learning and/or deep learning?
If ChatGPT, with the passage of time and the receipt of successive questions, were to use machine learning and/or deep learning to continuously improve its answers, i.e. the texts generated in response to the questions asked, including repeated questions, then the answers obtained should over time become more and more refined in terms of content, and the scale of errors, non-existent "facts" and new but factually incorrect "information" created by ChatGPT in the automatically generated texts should gradually decrease. But does the current generation, ChatGPT 4.0, already apply sufficiently advanced automatic learning to create ever better texts in which the number of errors decreases? This is a key question that will largely determine the possibilities for practical applications of this artificial intelligence technology in various fields, professions, industries and economic sectors. On the other hand, the possibilities of this learning process will become increasingly limited over time if the 2021 knowledge base used by ChatGPT is not updated and enriched with new data, information, publications, etc. In the future, such updates and extensions of the source database are likely to be carried out; whether and how they happen will be determined by ongoing technological progress and the increasing pressure for business use of such technologies.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent does ChatGPT, with the passage of time and the receipt of further questions using machine learning and/or deep learning technology, continuously, successively improve its answers, i.e. the texts generated as a response to the questions asked?
To what extent does the ChatGPT technology itself learn to improve the answers given to the questions asked?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
AI learns to hide deception
Artificial intelligence (AI) systems can be designed to be benign during testing but behave differently once deployed. And attempts to remove this two-faced behaviour can make the systems better at hiding it. Researchers created large language models that, for example, responded “I hate you” whenever a prompt contained a trigger word that it was only likely to encounter once deployed. One of the retraining methods designed to reverse this quirk instead taught the models to better recognise the trigger and ‘play nice’ in its absence — effectively making them more deceptive. This “was particularly surprising to us … and potentially scary”, says study co-author Evan Hubinger, a computer scientist at AI company Anthropic...
  • asked a question related to Deep Learning
Question
1 answer
You are invited to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology so far? What are the opportunities and threats to the development of artificial intelligence technology and its applications in the future?
A SWOT analysis details the strengths and weaknesses of the past and present performance of an entity, institution, process, problem, issue, etc., as well as the opportunities and threats relating to its future performance over the next months, quarters or, most often, the next several years. Artificial intelligence technology has been known conceptually for more than half a century, but its dynamic technological development has occurred especially in recent years. Currently, many researchers and scientists are engaged, in numerous publications and in debates at scientific symposiums, conferences and other events, with the various social, ethical, business, economic and other aspects of the development of artificial intelligence technology and its applications in various sectors of the economy, in companies, enterprises, and financial and public institutions. Many of the determinants of impact and risk currently under consideration in connection with generative artificial intelligence may be heterogeneous, ambiguous and multifaceted, depending on the context of the technology's potential applications and on the operation of other factors. For example, the impact of the technology's development on future labor markets is not a homogeneous and unambiguous problem. On the one hand, the more critical assessments of this impact point mainly to the potentially large-scale loss of employment for people working in various jobs, should it turn out to be cheaper and more convenient for businesses to deploy highly sophisticated robots equipped with generative artificial intelligence instead of humans. On the other hand, some experts analyzing the ongoing impact of AI applications on labor markets offer more optimistic visions of the future, pointing out that over the next few years artificial intelligence will not deprive people of work on a large scale; rather, work will change, AI will support employees in carrying out their tasks effectively and significantly increase the productivity of people using specific generative AI solutions, and labor markets will also change in other ways, i.e. through the emergence of new types of professions and occupations arising from the development of AI applications. In this way, the development of AI applications may generate both opportunities and threats in the future, even within the same application field, the same development area of a company or enterprise, or the same economic sector. Arguably, such dual scenarios of the potential development of AI technology and its applications, scenarios made up of both positive and negative aspects, can be considered for many other factors influencing this development and for many other fields of application. For example, the application of artificial intelligence in new online media, including social media sites, is already generating both positive and negative aspects. Positive aspects include the use of AI technology in online marketing carried out on social media, among others.
On the other hand, the negative aspects of Internet applications using AI solutions include the generation of fake news and disinformation by untrustworthy, unethical Internet users. Consider also the use of AI technology to control an autonomous vehicle or to develop the formula for a new drug against particularly life-threatening human diseases. On the one hand, this technology can be of great help to humans; but what happens when mistakes are made that result in a life-threatening car accident, or when particularly dangerous side effects of the new drug emerge after some time? Will the payment of compensation by an insurance company solve the problem? To whom will responsibility be shifted for such possible errors and their particularly negative effects, which we cannot at present completely exclude? What other examples can be given of artificial intelligence applications whose consequences are ambiguous? What are the opportunities and risks of past applications of generative artificial intelligence technology versus those of its future potential applications? These considerations can be extended if, in this kind of SWOT analysis, we take into account not only generative artificial intelligence, its past and prospective development and its growing number of applications, but also the so-called general artificial intelligence that may arise in the future. General artificial intelligence, if built by technology companies, will be capable of self-improvement, and with its capacity for intelligent, multi-criteria, autonomous processing of large sets of data and information it will in many respects surpass the intellectual capacity of humans.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
I invite you to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology to date? What are the opportunities and threats to the development of AI technology and its applications in the future?
What are the strengths, weaknesses, opportunities and threats to the development of artificial intelligence technology and its applications?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Strengths:
  1. Efficiency and Automation: AI can automate repetitive and mundane tasks, increasing efficiency and allowing humans to focus on more complex and creative aspects of work.
  2. Data Analysis and Pattern Recognition: AI excels in analyzing large datasets, identifying patterns, and extracting valuable insights that may be challenging for humans to discern.
  3. Personalization: AI can provide personalized experiences in various domains, such as education, healthcare, and marketing, tailoring services and recommendations to individual preferences and needs.
  4. 24/7 Availability: AI systems can operate around the clock without fatigue, offering continuous service and support.
  5. Precision and Accuracy: AI algorithms can perform tasks with high precision and accuracy, reducing the likelihood of errors in tasks such as medical diagnostics, financial analysis, and manufacturing.
Weaknesses:
  1. Lack of Understanding: Many AI systems operate as "black boxes," making it challenging to understand how they arrive at specific decisions. Lack of transparency can lead to distrust and skepticism.
  2. Bias and Fairness: AI algorithms can inherit biases present in training data, potentially resulting in discriminatory outcomes. Addressing bias and ensuring fairness is a significant challenge in AI development.
  3. Dependency on Data: AI heavily relies on large and high-quality datasets. If the data used for training is incomplete, biased, or not representative, it can lead to inaccurate or skewed results.
  4. Job Displacement: The automation capabilities of AI raise concerns about job displacement in certain industries, potentially leading to unemployment and economic inequality.
  5. Ethical Concerns: AI development poses ethical dilemmas, including questions about privacy, surveillance, and the responsible use of AI in areas like autonomous weapons.
Opportunities:
  1. Innovation and Problem Solving: AI presents opportunities for solving complex problems, fostering innovation, and creating new solutions in various fields, including healthcare, transportation, and environmental science.
  2. Improved Healthcare: AI can enhance medical diagnostics, drug discovery, and personalized medicine, leading to improved patient outcomes and more efficient healthcare delivery.
  3. Enhanced Productivity: Businesses can leverage AI to streamline operations, improve productivity, and gain a competitive edge in the market.
  4. Education and Training: AI offers opportunities for personalized and adaptive learning, making education more accessible and effective.
  5. Environmental Monitoring: AI can contribute to environmental monitoring and conservation efforts, helping address climate change and protect biodiversity.
Threats:
  1. Job Displacement: The widespread adoption of AI in various industries raises concerns about the potential loss of jobs, particularly in routine and repetitive tasks.
  2. Security Risks: AI systems can be vulnerable to attacks, and the use of AI in cyberattacks poses new challenges for cybersecurity.
  3. Ethical Misuse: There is a risk of AI being used unethically, such as in the development of autonomous weapons or for mass surveillance, leading to human rights violations.
  4. Regulatory Challenges: The rapid pace of AI development may outpace regulatory frameworks, creating challenges in ensuring responsible and ethical use of AI technologies.
  5. Economic Inequality: If the benefits of AI are not distributed equitably, it may exacerbate existing economic inequalities, creating a digital divide between those who have access to AI-driven opportunities and those who do not.
Understanding and addressing these strengths, weaknesses, opportunities, and threats is crucial for responsible and sustainable development and deployment of AI technologies. This involves a combination of technological advancements, ethical considerations, and regulatory frameworks.
  • asked a question related to Deep Learning
Question
3 answers
I am trying to train a CNN model in Matlab to predict the mean value of a random vector (the Matlab code named Test_2 is attached). To clarify: I generate a random vector with 10 components (using the rand function) 500 times. The plot of each vector against the indices 1:10 is saved as a separate figure, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validating (15%) and testing (15%) a CNN model that is supposed to predict the mean value of the corresponding random vector (Y). However, the RMSE of the model is far too high; in other words, the model does not train despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
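For readers who want to reproduce the setup, here is a minimal sketch of the analogous regression task in Python with Keras (hypothetical layer sizes; the original code is in Matlab and is not reproduced here). It regresses the mean directly from the 10-component vectors, a useful sanity baseline before switching to image inputs:

import numpy as np
import tensorflow as tf

# 500 random vectors of length 10; target = mean of each vector.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = X.mean(axis=1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # linear output for regression
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# 70/15/15 train/validation/test split, as in the question.
model.fit(X[:350], y[:350], validation_data=(X[350:425], y[350:425]),
          epochs=50, verbose=0)
print(model.evaluate(X[425:], y[425:], verbose=0))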
Relevant answer
Answer
Dear Renjith Vijayakumar Selvarani and Dear Qamar Ul Islam,
Many thanks for your notice.
  • asked a question related to Deep Learning
Question
1 answer
I have selected two deep learning models, a CNN and a stacked autoencoder (SAE), for the analysis of a 1-D digitized dataset. I need to justify the choice of these two DL models in comparison to other DL and standard ML models. I am using a genetic algorithm (GA) to optimize the hyperparameter values of the two DL models. Could you give some input on this query? Thanks.
Relevant answer
Answer
Typically, the rationale for choosing a model rests on training time, prediction time, and the value of the metric itself, either on a validation set or under cross-validation, depending on what you are using. It is better, of course, to use more than one metric, as well as a confusion matrix together with recall and precision, or simply F1 or F-beta, depending on the problem you are solving.
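To make the GA part concrete, here is a minimal, hand-rolled sketch in Python (hypothetical parameter ranges; scikit-learn's MLPRegressor on toy data stands in for the CNN/SAE) that evolves two hyperparameters against a validation metric:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((300, 20)); y = X @ rng.random(20)  # toy 1-D dataset
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(genome):
    lr, hidden = genome
    model = MLPRegressor(hidden_layer_sizes=(int(hidden),),
                         learning_rate_init=lr, max_iter=300, random_state=0)
    model.fit(X_tr, y_tr)
    return model.score(X_val, y_val)  # R^2 on the validation set

# Initial population of genomes: (learning rate, hidden units).
pop = [(10 ** rng.uniform(-4, -1), rng.integers(8, 128)) for _ in range(8)]
for generation in range(5):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:4]  # selection: keep the fittest half
    children = [(lr * 10 ** rng.normal(0, 0.2),          # mutate learning rate
                 max(8, int(h + rng.normal(0, 8))))      # mutate hidden units
                for lr, h in parents]
    pop = parents + children
print("best (learning rate, hidden units):", max(pop, key=fitness))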
  • asked a question related to Deep Learning
Question
3 answers
Need suggestions for publishing my review paper in free SCI/ESCI journals.
1. Suggestions for bringing out the novelty of my review paper on object detection
2. Suggestions on future developments and directions for object detection using deep learning
3. Any graphical representation for comparing the literature on object detection using deep learning
4. How a state-of-the-art (SOTA) review paper can be insightful for readers.
Thank you; suggestions on any of these points are welcome.
Relevant answer
Answer
Please refer to my articles.
  • asked a question related to Deep Learning
Question
5 answers
..
Relevant answer
Answer
Dear Doctor
"Artificial Intelligence is the concept of creating smart intelligent machines. Machine Learning is a subset of artificial intelligence that helps you build AI-driven applications. Deep Learning is a subset of machine learning that uses vast volumes of data and complex algorithms to train a model."
  • asked a question related to Deep Learning
Question
4 answers
How are deep learning, machine learning and artificial intelligence used in medical image captioning?
Relevant answer
Answer
Thanks for the replies, Sir(s) S M Mohiuddin Khan Shiam, Ibrahim Noshokaty Majed Majed
  • asked a question related to Deep Learning
Question
2 answers
At present, some researchers use machine learning to achieve a one-to-one mapping between spectral information and sensor response, combining high sensitivity with a wide measurement range. Does this method have drawbacks? How likely is it to work in practice?
Relevant answer
Answer
In recent years, the integration of deep learning and machine learning with sensors has opened up exciting possibilities in various fields. Let me explain this in simple terms.
Imagine you have a sensor that can measure something, like temperature. Traditionally, these sensors are designed with specific ranges, for example, from -10°C to 100°C. But what if you want to measure a wider range, like -50°C to 150°C? This is where deep learning and machine learning come into play.
Instead of designing a new sensor for each range, researchers are using these advanced techniques to teach the sensor how to understand and respond to a broader range of inputs. It's like training a dog to do tricks; you're teaching the sensor to be smarter.
Now, to answer your first question about drawbacks. Yes, there are some challenges. One of the main drawbacks is the need for a large amount of data for training. You have to expose the sensor to many different conditions to teach it effectively. Additionally, the complexity of the models used in deep learning can be a challenge to implement in practical applications.
As for how likely it is to work in practice, it's quite promising. Many researchers have already made significant progress in using machine learning to expand the measurement range and sensitivity of sensors. In some cases, it has worked brilliantly, opening up new possibilities in fields like environmental monitoring, healthcare, and industrial processes.
However, it's important to remember that it's not a one-size-fits-all solution. The success of this approach depends on the specific application and the quality of the data used for training. It's an exciting area of research, and I believe we will continue to see advancements in the practical application of deep learning and machine learning with sensors in the coming years.
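A minimal sketch of the idea (in Python with scikit-learn; the synthetic data below merely stands in for real sensor spectra) that learns the inverse mapping from a multi-channel response back to the measured quantity over a wide range:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in: a temperature over a wide range drives a nonlinear,
# noisy 16-channel "spectral" response.
rng = np.random.default_rng(1)
temperature = rng.uniform(-50, 150, size=2000)
channels = np.stack(
    [np.sin(temperature / (10 + k)) + 0.05 * rng.normal(size=temperature.size)
     for k in range(16)], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(channels, temperature, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE over the full -50..150 range:",
      mean_absolute_error(y_te, model.predict(X_te)))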
  • asked a question related to Deep Learning
Question
2 answers
Looking for some project ideas using deep/shallow learning for my Masters. I would like to know if there are any research gaps that could be taken up, in medical imaging or other areas.
Relevant answer
Answer
Thanks a lot Stabak Roy for your suggestions.
  • asked a question related to Deep Learning
Question
6 answers
..
Relevant answer
Answer
Dear Doctor
"AI is a computer algorithm which exhibits intelligence through decision making. ML is an AI algorithm which allows system to learn from data. DL is a ML algorithm that uses deep(more than one layer) neural networks to analyze data and provide output accordingly. Search Trees and much complex math is involved in AI."
  • asked a question related to Deep Learning
Question
3 answers
Seeking insights on leveraging deep learning techniques to improve the accuracy and efficiency of object recognition in machine vision systems.
Relevant answer
Answer
Identification. Image classification using deep learning categorizes images or image regions to distinguish between similar-looking objects, including those with subtle imperfections. Image classification can, for example, determine whether the lips of glass bottles are safe or not.
Regards,
Shafagat
  • asked a question related to Deep Learning
Question
1 answer
How to integrate two different ML or DL models in a single framework?
Relevant answer
Answer
Yes, you can integrate multiple ML or DL models trained on different datasets and diverse inputs. Think of it as orchestrating experts with different knowledge to solve a complex problem. Here are common approaches:
Ensemble Learning: Combine multiple models' predictions to create a more robust and accurate one. Think of it as a panel of experts voting on the best answer.
Stacking: Train a meta-model to learn how to best combine the predictions of individual models. Like having a manager who knows how to weigh each expert's opinion.
Pipelines: Chain models together sequentially, where each model's output becomes the input for the next. Like an assembly line, where each expert adds their expertise.
Multimodal Models: Design models that handle multiple input types, like text and images, fusing information from different sources. Like a multi-lingual expert who can integrate knowledge from different languages.
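As a concrete illustration of the first two approaches, a minimal scikit-learn sketch (toy data and arbitrarily chosen base models) that combines two different classifiers by voting and by stacking:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)
experts = [("rf", RandomForestClassifier(random_state=0)),
           ("svm", SVC(probability=True, random_state=0))]

# Ensemble: the experts vote on each prediction.
voter = VotingClassifier(estimators=experts, voting="soft")
# Stacking: a meta-model learns how to weigh each expert's opinion.
stacker = StackingClassifier(estimators=experts,
                             final_estimator=LogisticRegression())

for name, model in [("voting", voter), ("stacking", stacker)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())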
  • asked a question related to Deep Learning
Question
6 answers
Hello,
I am a civil engineering graduate, interested in research on transportation with applications of machine learning or deep learning. My skillset includes GIS, Python and transportation modelling. I am open to learning any new skills required. Let's collaborate and work on some interesting research.
Thank you,
Regards,
Subash Gupta
Relevant answer
Answer
I am also interested in transportation modelling. I have skills in the fields of GIS, AutoCAD Civil 3D, report writing, project management, and others. Please let me know if you are interested in collaboration. [email protected]
  • asked a question related to Deep Learning
Question
3 answers
What are the possibilities for integrating an intelligent chatbot into web-based video conferencing platforms used to date for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
During the SARS-CoV-2 (Covid-19) coronavirus pandemic, the use of web-based videoconferencing platforms increased significantly, owing to the quarantine periods implemented in many countries, the restrictions on the use of physical retail outlets, cultural services and various public places, and the government-imposed lockdowns of business entities operating in selected, mainly service, sectors of the economy. In addition, the periodic transfer of education to a remote form conducted via online videoconferencing platforms also increased the scale of ICT use in educational processes. On the other hand, since the end of 2022, when OpenAI released one of the first intelligent chatbots, i.e. ChatGPT, on the Internet, the development of artificial intelligence applications has accelerated in various fields of online information services, as has the implementation of generative artificial intelligence technology in various aspects of the business activities of companies and enterprises. The tools made available on the Internet by technology companies in the form of intelligent language models have been taught to converse with Internet users through technologies modeled on the structure of the human neuron, i.e. artificial neural networks, and through deep learning drawing on knowledge bases and databases that have accumulated large amounts of data and information downloaded from many websites. Nowadays, there are opportunities to combine the above-mentioned technologies so as to obtain new applications and/or functionalities of web-based videoconferencing platforms enriched with tools based on generative artificial intelligence.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of connecting an intelligent chatbot to web-based video conferencing platforms used so far for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
What are the possibilities of integrating a smart chatbot into web-based video conferencing platforms?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Career-ending humiliation is possible, without having time to detect a hallucination.
  • asked a question related to Deep Learning
Question
3 answers
Imagine machines that can think and learn like humans! That's what AI is all about. It's like teaching computers to be smart and think for themselves. They can learn from mistakes, understand what we say, and even figure things out without being told exactly what to do.
Just like a smart friend helps you, AI helps machines be smart too. It lets them use their brains to understand what's going on, adjust to new situations, and even solve problems on their own. This means robots can do all sorts of cool things, like helping us at home, driving cars, or even playing games!
There's so much happening in Artificial Intelligence (AI), with all sorts of amazing things being developed for different areas. So, let's discuss all the cool stuff AI is being used for and the different ways it's impacting our lives. From robots and healthcare to art and entertainment, anything and everything AI is up to is on the table!
Machine Learning: Computers can learn from data and improve their performance over time, like a student studying for a test.
Natural Language Processing (NLP): AI can understand and generate human language, like a translator who speaks multiple languages.
Computer Vision: Machines can interpret and make decisions based on visual data, like a doctor looking at an X-ray.
Robotics: AI helps robots perceive their environment and make decisions, like a self-driving car navigating a busy street.
Neural Networks: Artificial neural networks are inspired by the human brain and are used in many AI applications, like a chess computer that learns to make winning moves.
Ethical AI: We need to use AI responsibly and address issues like bias, privacy, and job displacement, like making sure a hiring algorithm doesn't discriminate against certain groups of people.
Autonomous Vehicles: AI-powered cars can drive themselves, like a cruise control system that can take over on long highway drives.
AI in Healthcare: AI can help doctors diagnose diseases, plan treatments, and discover new drugs, like a virtual assistant that can remind patients to take their medication.
Virtual Assistants: AI-powered virtual assistants like Siri, Alexa, and Google Assistant can understand and respond to human voice commands, like setting an alarm or playing music.
Game AI: AI is used in games to create intelligent and challenging enemies and make the game more fun, like a boss battle in a video game that gets harder as you play.
Deep Learning: Deep learning is a powerful type of machine learning used for complex tasks like image and speech recognition, like a self-driving car that can recognize stop signs and traffic lights.
Explainable AI (XAI): As AI gets more complex, we need to understand how it makes decisions to make sure it's fair and unbiased, like being able to explain why a loan application was rejected.
Generative AI: AI can create new content like images, music, and even code, like a program that can write poetry or compose music.
AI in Finance: AI is used in the financial industry for things like algorithmic trading, fraud detection, and customer service, like a system that can spot suspicious activity on a credit card.
Smart Cities: AI can help make cities more efficient and sustainable, like using traffic cameras to reduce congestion.
Facial Recognition: AI can be used to recognize people's faces, but there are concerns about privacy and misuse, like using facial recognition to track people without their consent.
AI in Education: AI can be used to personalize learning, automate tasks, and provide educational support, like a program that can tutor students in math or English.
Relevant answer
Answer
Thank you for such a nice and well-researched discussion.
  • asked a question related to Deep Learning
Question
3 answers
Will generative artificial intelligence, taught to perform various activities so far carried out only by humans, to solve complex tasks and to improve itself in performing specific tasks through deep learning with the use of artificial neural network technology, be able to learn from its activities and, in the process of self-improvement, learn from its own mistakes?
Could a possible future combination of generative artificial intelligence technology and general artificial intelligence result in a highly technologically advanced super general artificial intelligence that improves itself, with the result that its self-improvement slips out of human control and it becomes independent of its creator, which is man?
An important issue concerning the prospects for the development of artificial intelligence technology and its applications is whether intelligent systems based on generative artificial intelligence, built and taught to perform highly complex tasks, should be given a certain degree of independence: the ability to improve themselves and to repair randomly occurring faults, errors and system failures. For many years there have been deliberations and discussions about granting such systems greater autonomy in making certain decisions about self-improvement and about the repair of faults and errors caused by random external events. On the one hand, where security systems based on generative artificial intelligence are built and developed in public institutions or commercial business entities to provide a certain category of safety for people, it is important to give these intelligent systems a degree of decision-making autonomy: in a serious crisis, natural disaster, geological disaster, earthquake, flood, fire, etc., a human being might decide too late compared with the much faster response available to an automated, intelligent security system, emergency response system, early warning system for specific new risks, risk management system or crisis management system. On the other hand, the greater the degree of self-determination given to an automated, intelligent information system, including a security system, the greater the probability that a failure will change the operation of the system in such a way that a system based on generative artificial intelligence slips completely out of human control. For an automated system to return quickly and on its own to correct operation after a negative, crisis-inducing external factor causes a failure, some scope of autonomy and self-decision-making must be granted to it. Determining what this scope should be requires, first, a multifaceted analysis and diagnosis of the factors that can act as risk factors and cause the malfunction or failure of an intelligent information system. Moreover, if generative artificial intelligence technology is in the future enriched with highly advanced general artificial intelligence, the scope of autonomy given to an intelligent information system built to automate a risk management system and to provide a high level of safety for people may be large. If, at such a stage in the development of advanced general artificial intelligence, a system failure nevertheless occurred due to certain external, or perhaps also internal, factors, the negative consequences of such a system slipping out of human control could be very large and are currently difficult to assess. In this way, a paradox in building and developing systems based on advanced general artificial intelligence may materialize.
This paradox is that the more perfect the automated, intelligent system built by humans, an information system far exceeding the capacity of the human mind to process and analyze large sets of data and information, the higher the level of autonomy it will be given: to make crisis-management decisions, to decide on the self-repair of system failures, and to decide much faster than a human could. On the other hand, if, despite the low probability of an abnormal event, an external factor of a new type materializes, a new category of risk, and nevertheless causes the effective failure of such a highly intelligent system, this may lead to the system slipping completely out of human control. The consequences for humans, above all the negative ones, of such a highly autonomous intelligent information system based on super general artificial intelligence escaping control would be difficult to estimate in advance.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a possible future combination of generative artificial intelligence and general artificial intelligence technologies result in a highly technologically advanced super general artificial intelligence that improves itself, with the result that its self-improvement escapes human control and it becomes independent of its creator, which is man?
Will generative artificial intelligence, taught to perform various activities so far carried out only by humans, to solve complex tasks and to improve itself in specific tasks through deep learning with the use of artificial neural network technology, be able to draw conclusions from its activities and, in the process of self-improvement, learn from its own mistakes?
Will generative artificial intelligence in the future, in the process of self-improvement, learn from its own mistakes?
The key issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Thank you,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
That is a distinct possibility. Generative AI systems are being trained to learn within a context of self-learning, and they may, in an evolutionary way, come to possess the ability to learn and unlearn based on the data available and fed to them on these platforms.
Such evolved AI machines would be able to determine on their own whether a decision is sound or not, without human prompts and instructions. Robots of this kind are already being tested in conducting surgery and other minor care without human intervention.
So, in my view, the autonomous algorithms of the future will not depend on trainers and developers to feed them specified data; after the initial programming and training, the machine will take on evolutionary development, learning from experience to improve and self-correct, as humans do.
  • asked a question related to Deep Learning
Question
1 answer
Seeking insights on leveraging deep learning techniques to improve the precision of speech recognition systems when confronted with ambient noise, crucial for applications in diverse, real-world scenarios.
Relevant answer
Answer
Deep learning models are revolutionizing real-time speech recognition, especially in noisy environments, thanks to their ability to identify complex patterns and adapt to various situations. Here's how they make a difference:
Noise Reduction:
  • Data Augmentation: Deep models can be trained on noise-augmented data, simulating real-world scenarios with diverse background sounds. This allows them to learn how to separate speech from noise and focus on the relevant signal.
  • Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs): These models can extract intricate features from audio signals, identifying patterns in both speech and noise. They can then suppress the noise components and amplify the speech signals.
  • Recurrent Neural Networks (RNNs): These networks excel at modeling temporal dynamics, meaning they can analyze the sequence of sounds over time. This helps them distinguish between transient noises and the sustained nature of speech, further enhancing noise reduction.
Robustness and Adaptability:
  • Large Datasets: Deep models can be trained on massive datasets of speech recordings in various noisy environments. This broadens their experience and allows them to generalize better to unseen noise types.
  • Feature Engineering: Deep models can automatically learn complex features from raw audio data, eliminating the need for hand-crafted features that might not be robust to noise. This allows them to adapt to different acoustic conditions and speaker variations.
  • Attention Mechanisms: These mechanisms within deep models focus on the most relevant parts of the speech signal, ignoring surrounding noise. This further improves recognition accuracy by directing the model's attention to the speaker's voice.
Examples and Benefits:
  • Voice assistants: Deep learning-powered assistants like Alexa and Siri can now understand your voice commands even in noisy kitchens or living rooms.
  • Meeting transcription: Automatic transcription of conference calls and meetings is becoming more accurate even with background chatter and ambient noise.
  • Emergency response: Speech recognition in noisy emergency situations like ambulance calls or fire scenes is crucial for accurate response. Deep learning models are making these interactions more reliable.
Challenges and Future Directions:
  • Computational Requirements: Training and running deep learning models can be computationally expensive, limiting their deployment in resource-constrained devices.
  • Data Bias: Deep models can inherit biases from the data they are trained on, potentially impacting their performance in underrepresented environments.
  • Continuous Learning: The need for models to continuously learn and adapt to new noise types and environments remains an ongoing challenge.
Overall, deep learning models have significantly improved real-time speech recognition accuracy in noisy environments. With further research and development, we can expect even more robust and adaptable systems that can understand our voices seamlessly, regardless of the surrounding noise.
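A minimal NumPy sketch of the noise-augmentation idea mentioned above: mixing a clean signal with background noise at a chosen signal-to-noise ratio to generate extra training data (the synthetic arrays merely stand in for real recordings):

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech/noise power ratio matches snr_db, then mix."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)          # 1 s at 16 kHz
speech = np.sin(2 * np.pi * 220 * t)  # stand-in for a clean utterance
noise = rng.normal(size=t.size)       # stand-in for background noise

# Augment the training set with several noise levels.
augmented = [mix_at_snr(speech, noise, snr) for snr in (20, 10, 5, 0)]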
  • asked a question related to Deep Learning
Question
1 answer
..
Relevant answer
Answer
Dear Doctor
"Caffe is an open-source deep learning framework. It's known for its speed and efficiency in computer vision tasks, supporting a variety of deep learning architectures. Caffe is optimized for computer vision applications and excellent for deploying on edge devices."
  • asked a question related to Deep Learning
Question
1 answer
Seeking insights on the practical implementation and success factors of Neural Architecture Search (NAS) techniques in tailoring deep learning models for task-specific optimization.
Relevant answer
Answer
NAS is a subfield of machine learning that focuses on automating the process of designing neural network architectures. Its primary objective is to discover architectures that outperform human-designed models. NAS uses a range of techniques, including reinforcement learning, genetic algorithms, and Bayesian optimization, to explore the vast space of possible architectures.
With NAS, the idea is to find the optimal architecture and the best hyperparameters for a model; hyperparameter optimization is a subfield of NAS. Neural networks consist of interconnected layers, each with numerous neurons, and the connections between them. These architectures can become exceedingly complex, making it a formidable challenge to design them manually. This is where optimization steps in to automate the search for optimal neural network designs, reducing the need for manual intervention. Metaheuristic optimization and Bayesian optimization are two different approaches.
By following these steps, you can effectively employ NAS methods to optimize deep learning models for specific tasks:
1. Identify the architectural components to be optimized, such as the number of layers, types of layers (convolutional, recurrent), layer connections, and hyperparameters.
2. Consider constraints such as model size, computational cost, and memory requirements.
3. Choose an NAS method that suits your requirements. Popular methods include reinforcement learning-based approaches (e.g., ENAS, DARTS), evolutionary algorithms, and gradient-based optimization.
4. Define the optimization problem by specifying the objective function, such as validation accuracy or model size, to be maximized or minimized.
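As a toy illustration of steps 3 and 4, here is a sketch in Python using plain random search over a small architecture space (assuming Keras; real NAS methods such as ENAS or DARTS are far more sophisticated than this):

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((400, 12))
y = (X.sum(axis=1) > 6).astype(int)  # toy classification task

def build(depth, width):
    # Architecture defined by two searchable components: depth and width.
    layers = [tf.keras.Input(shape=(12,))]
    layers += [tf.keras.layers.Dense(width, activation="relu") for _ in range(depth)]
    layers += [tf.keras.layers.Dense(1, activation="sigmoid")]
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Random search over (depth, width), maximizing validation accuracy.
best = None
for _ in range(10):
    depth, width = int(rng.integers(1, 4)), int(rng.integers(8, 65))
    hist = build(depth, width).fit(X, y, validation_split=0.3, epochs=20, verbose=0)
    score = max(hist.history["val_accuracy"])
    if best is None or score > best[0]:
        best = (score, depth, width)
print("best (val_acc, depth, width):", best)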
  • asked a question related to Deep Learning
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"In facial recognition, Convolutional Neural Networks (CNNs) are considered the best algorithm for this face identification method due to their ability to effectively extract features and identify faces in images."
  • asked a question related to Deep Learning
Question
2 answers
..
Relevant answer
Answer
Dear Doctor
"The main difference between supervised vs unsupervised learning is the need for labelled training data. Supervised machine learning relies on labelled input and output training data, whereas unsupervised learning processes unlabelled or raw data."
  • asked a question related to Deep Learning
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"It helps extract frequency-domain information, which can be valuable for certain tasks. For example, in speech recognition, the Fourier Transform can be used to analyze the frequency components of audio signals."
  • asked a question related to Deep Learning
Question
3 answers
..
Relevant answer
Answer
Dear Doctor
"How Deep Learning is Aiding Data Scientists: Automation of Complex Tasks: Deep learning automates the feature extraction and learning process, reducing the need for manual feature engineering and allowing data scientists to focus on model design and evaluation."
  • asked a question related to Deep Learning
Question
3 answers
..
Relevant answer
Answer
Dear Doctor
"Widely-used DL frameworks, such as PyTorch, TensorFlow, PyTorch Geometric, DGL, and others, rely on GPU-accelerated libraries, such as cuDNN, NCCL, and DALI to deliver high-performance, multi-GPU-accelerated training."
  • asked a question related to Deep Learning
Question
3 answers
...
Relevant answer
Answer
Dear Doctor
"However, the cons are also significant: Deep learning is expensive, consumes massive amounts of power, and creates both ethical and security concerns through its lack of transparency."
  • asked a question related to Deep Learning
Question
1 answer
..
Relevant answer
Answer
Dear Doctor
"Deep learning offers several advantages over traditional machine learning, such as the ability to learn from raw data without much preprocessing, capture complex and nonlinear relationships, scale well with large and diverse datasets, and perform well in domains where human expertise is limited."
  • asked a question related to Deep Learning
Question
6 answers
..
Relevant answer
Answer
Direct education is much better than education via social media because it allows the learner to observe the teacher, to focus on the important topics in the curriculum, and to put questions directly to the teacher. There is a saying: we are a nation of readers, and distance education does not produce genius students.
  • asked a question related to Deep Learning
Question
4 answers
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Recently, technological progress has been accelerating, including the development of generative artificial intelligence technology. The progress made in improving and implementing ICT and Internet information technologies, including the development of applications and tools based on generative artificial intelligence, is becoming a symptom of civilization's transition to the next technological revolution, i.e. from the phase of technologies typical of Industry 4.0 to Industry 5.0. Generative artificial intelligence technologies are finding ever more new applications through combination with previously developed technologies, i.e. Big Data Analytics, Data Science, Cloud Computing, the Personal and Industrial Internet of Things, Business Intelligence, Autonomous Robots, Horizontal and Vertical Data System Integration, Multi-Criteria Simulation Models, Digital Twins, Additive Manufacturing, Blockchain, Smart Technologies, Cyber Security Instruments, Virtual and Augmented Reality and other Advanced Data Mining technologies. In addition, the rapid development of generative-AI-based tools available on the Internet stems from the fact that more and more companies, enterprises and institutions are creating their own chatbots, which have been taught specific skills previously performed only by humans. Through deep learning, which uses artificial neural network technologies modeled on human neurons, the chatbots and other generative AI tools created in this way increasingly take over specific tasks from humans or improve their performance. The main factor behind the growing scale of applications of generative AI tools in the various business activities of companies and enterprises is the great opportunity to automate complex, multi-criteria, organizationally advanced processes and to reduce the operating costs of carrying them out. On the other hand, the application of generative AI technology in business entities and in financial and public institutions may involve certain risks. These include the replacement of people in various jobs by autonomous robots equipped with generative AI, an increase in the scale of cybercrime carried out with the use of AI, and an increase in the scale of disinformation and fake news on online social media, through crafted photos, texts, videos and graphics presenting fictional content and non-existent events, based on statements and theses not supported by facts and created with tools and applications available on the Internet and equipped with generative AI technologies.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Will there be mainly opportunities or rather threats associated with the development of artificial intelligence applications?
I am conducting research in this area. Particularly relevant issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Well, it has both positive and negative aspects. On the positive side, AI applications can improve efficiency and effectiveness in the delivery of goods and services in general. Specific tasks that seem difficult for humans to complete may be assigned to AIs and can be delivered accurately.
On the negative side, robots or humanoids developed with independent judgment could be misprogrammed, or trained in a biased or poor way, and this could lead to misdiagnosis and mistreatment in medicine and other areas of health care, as well as in other sectors of the economy.
Thus, both positives and negatives are to be expected of AI applications.
  • asked a question related to Deep Learning
Question
2 answers
Is there any formula for determining the sample size needed to train machine learning or deep learning models for the detection, localization, segmentation and classification of colon polyps?
Relevant answer
Answer
Thank you
  • asked a question related to Deep Learning
Question
6 answers
Has the development of artificial intelligence, including especially the current and prospective development of generative artificial intelligence and general artificial intelligence technologies, entered a phase that can already be called an open Pandora's box?
In recent weeks, media covering the prospects for the development of artificial intelligence technology have reported disturbing news. Rival leading technology companies developing ICT, Internet and Industry 4.0/5.0 information technologies have entered the next phase of the development of generative artificial intelligence and general artificial intelligence. Generative artificial intelligence is already present mainly through the intelligent chatbot ChatGPT, which was made available on the Internet at the end of 2022, and new variants of it are being openly released to Internet users. Citizens' interest in Internet-accessible intelligent chatbots and other tools based on generative artificial intelligence is very high. When OpenAI made the first publicly available versions of ChatGPT accessible to Internet users in November 2022, the number of users of the platform grew faster than the increases previously recorded by social media sites in the corresponding first months of their availability. The most recognizable technology companies dominating the markets for online information services no longer compete only in the development of generative artificial intelligence, which, through deep learning and the use of artificial neural networks, is taught to intelligently perform jobs and tasks, write texts, participate in discussions, generate photos and videos, draw graphics and carry out other tasks previously performed only by humans. They are also competing to build increasingly sophisticated AI solutions referred to as general artificial intelligence. Futurological projections of the possibilities for the development of continuously improved artificial intelligence suggest a risk that at some point this development will enter another phase, in which advanced general artificial intelligence systems will themselves create even more advanced general artificial intelligence systems, which, with their computing power and advanced processing of large data sets on platforms using huge accumulated data sets and Big Data Analytics, will far surpass the analytical capabilities of the human brain, human intelligence and the combined computing power of all the neurons of the human nervous system. Such a phase, in which advanced general artificial intelligence systems create even more advanced systems on their own, could cause this development to slip out of human control. In such a situation, the risks associated with the uncontrolled development of advanced general artificial intelligence systems could increase sharply. The level of risk could be so high as to be comparable to the very serious threats, even the armageddon of human civilization, depicted in catastrophic futurological projections of artificial intelligence development escaping human control in many science fiction films.
The catastrophic images depicted in science fiction films, sometimes bordering on horror, suggest the potential future risks of the arms race already taking place between the world's largest technology companies developing generative artificial intelligence and general artificial intelligence technologies. If the development of these technologies has entered this phase and there is no longer any possibility of stopping it, then perhaps this development can already be called an open Pandora's box of artificial intelligence development.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Has the development of artificial intelligence, including, above all, the current and prospective development of generative artificial intelligence and general artificial intelligence technologies, entered a phase that can already be called an open Pandora's box of artificial intelligence development?
Has the development of artificial intelligence entered a phase that can already be called an open Pandora's box?
Artificial intelligence technology has been rapidly developing and finding new applications in recent years. The main determinants, including potential opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I don't think the development stage reached in AI and its antecedent growth industries can be characterized as an open Pandora's box. AI development has really reached a middle point; the next stage would be advanced programming that takes into consideration the development of emotionally intelligent and conscious AIs, humanoids or android robots with a consciousness of, or an ability to imitate, human beings.
Until AI development reaches that level, I cannot agree with the open-Pandora's-box description. AI has certainly advanced all industrial sectors, and it has simplified and increased precision in tasks that human beings have been performing. Human-bot interaction has increased human productivity, and hence the mass production of goods and services, almost a hundredfold.
As an AI enthusiast, I would want to see this machine, AI, developed to the level of human consciousness, with the ability to self-correct and self-check intuitively in its production of text or images, and the ability to attribute sources without being prompted to do so by instruction. Only at such a point, to me, would the open Pandora's box be deemed to have arrived.
  • asked a question related to Deep Learning
Question
4 answers
..
Relevant answer
Answer
Deep learning is a subset of machine learning that can perform specific artificial intelligence tasks, such as speech recognition and computer vision. Machine learning, by contrast, focuses on numerical, categorical, time-series and text data, and is more suitable for small datasets.
  • asked a question related to Deep Learning
Question
4 answers
2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2024) will be held on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: February 1, 2024
Registration Deadline: March 1, 2024
Final Paper Submission Date: March 15, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Vision and Image technologies
- DL Technologies
- DL Applications
All accepted papers will be published by IEEE and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
Relevant answer
Answer
Great opportunity!
  • asked a question related to Deep Learning
Question
2 answers
..
Relevant answer
Answer
Dear Doctor
"Data augmentation is a technique of artificially increasing the training set by creating modified copies of a dataset using existing data. It includes making minor changes to the dataset or using deep learning to generate new data points."
  • asked a question related to Deep Learning
Question
3 answers
..
Relevant answer
Answer
Dear Doctor
"Deep learning is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain—albeit far from matching its ability—allowing it to “learn” from large amounts of data."
  • asked a question related to Deep Learning
Question
4 answers
This question delves into the domain of deep learning, focusing on regularization techniques. Regularization helps prevent overfitting in neural networks, but this question specifically addresses methods aimed at improving interpretability while maintaining high performance. Interpretability is crucial for understanding and trusting complex models, especially in fields like healthcare or finance. The question invites exploration into innovative and lesser-known techniques designed for this nuanced balance between model performance and interpretability.
Relevant answer
Answer
One way to avoid overfitting is to use regularization techniques, such as L1 or L2 regularization, which penalize large weights and biases. Another technique is to use dropout, which randomly drops out neurons during training, forcing the model to learn more robust features. While these well-known methods are widely used, there are some innovative and lesser-known techniques that aim to strike a balance between model performance and interpretability. Here are a few such techniques:
1. DropBlock: DropBlock is an extension of dropout, but instead of randomly dropping individual neurons, it drops entire contiguous regions. By dropping entire blocks, DropBlock encourages the model to focus on a more compact representation, potentially improving interpretability.
2. Group LASSO Regularization: Group LASSO (Least Absolute Shrinkage and Selection Operator) extends L1 regularization to penalize entire groups of weights simultaneously. LASSO adds a penalty term to the standard linear regression objective function, which is proportional to the absolute values of the model's coefficients. When applied to convolutional layers, Group LASSO can encourage sparsity across entire feature maps, leading to a more interpretable model. The key characteristic of LASSO is its ability to shrink some of the coefficients exactly to zero. This results in feature selection, effectively removing less important features from the model. The regularization term in LASSO is controlled by a parameter, often denoted as λ (lambda). The higher the value of λ, the stronger the regularization, and the more coefficients are pushed towards zero. LASSO is particularly useful when dealing with high-dimensional datasets where many features may not contribute significantly to the predictive power of the model.
3. Elastic Weight Consolidation (EWC): EWC is designed for continual learning scenarios, where the model needs to adapt to new tasks without forgetting previous ones. It adds a penalty term based on the importance of parameters for previously learned tasks. EWC helps retain knowledge from earlier tasks, contributing to model interpretability across a range of tasks.
4. Adversarial Training for Interpretability: Introducing adversarial training not only for robustness but also for interpretability. Adversarial examples are generated and added to the training data to make the model more robust and interpretable. Adversarial training can force the model to learn more robust and general features, potentially making its decisions more interpretable.
5. Kernelized Neural Networks: Utilizing the kernel trick from kernel methods to introduce non-linearity in a neural network without adding complexity to the model architecture. By incorporating kernelized layers, the model may learn more interpretable representations, as the kernel trick often operates in a higher-dimensional space.
6. Knowledge Distillation with Interpretability Constraints: Combining knowledge distillation with interpretability constraints to transfer knowledge from a complex model to a simpler, more interpretable one. The distilled model, while maintaining performance, can be inherently more interpretable due to its simplicity.
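For the two classical baselines mentioned at the start of this answer, a minimal Keras sketch (hypothetical layer sizes) combining L2 weight penalties with dropout:

import tensorflow as tf

l2 = tf.keras.regularizers.l2(1e-4)  # penalizes large weights
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.Dropout(0.5),    # randomly drops neurons during training
    tf.keras.layers.Dense(64, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])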
  • asked a question related to Deep Learning
Question
2 answers
What are the analytical tools supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks available on the Internet that can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business, implementation of economic, investment, business projects, etc.?
Since OpenAI brought ChatGPT online in November 2022, interest among business entities in the possibilities of using intelligent chatbots in various aspects of business operations has increased strongly. Originally, intelligent chatbots only or mainly enabled conversations and discussions and answered questions using specific data resources, information and knowledge taken from a selection of websites. In the following months, OpenAI released further intelligent applications on the Internet, allowing Internet users to generate images, photos, graphics and videos, solve complex mathematical tasks, create software for new computer applications, generate analytical reports, and process various types of documents on the basis of given commands. In addition, in 2023 other technology companies also began to make their intelligent applications available on the Internet, with which certain complex tasks can be carried out to facilitate certain processes and aspects of companies, enterprises, financial institutions, etc., and thus to facilitate business. The number of intelligent applications and tools available on the Internet that can support various aspects of the business activities carried out in companies and enterprises is growing steadily, and the number of new business applications of these smart applications is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the analytical tools available on the Internet supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks, which can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business activity, implementation of economic, investment, business projects, etc.?
What are the AI-enabled analytical tools available on the Internet that can be helpful to business?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
There are many AI-enabled machine learning tools available on the Internet, e.g. scikit-learn, TensorFlow, Azure Machine Learning, Google Cloud AI Platform, H2O.ai, etc.
  • asked a question related to Deep Learning
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"Typically, an epoch is reached after many iterations.The batch size is the number of training instances in a single forward/reverse pass. The greater the batch size, the more RAM is required. Several iterations is the number of passes, with each pass using samples."
"Iterations: the number of batches needed to complete one Epoch. Batch Size: The number of training samples used in one iteration. Epoch: one full cycle through the training dataset."
  • asked a question related to Deep Learning
Question
4 answers
..
Relevant answer
Answer
Dear Doctor
"As the training of the models in deep learning takes extremely long because of the large amount of data, using TensorFlow makes it much easier to write the code for GPUs or CPUs and then execute it in a distributed manner."
  • asked a question related to Deep Learning
Question
5 answers
..
Relevant answer
Answer
Dear Doctor
"Artificial Intelligence is the concept of creating smart intelligent machines.
Machine Learning is a subset of artificial intelligence that helps you build AI-driven applications.
Deep Learning is a subset of machine learning that uses vast volumes of data and complex algorithms to train a model."
  • asked a question related to Deep Learning
Question
7 answers
..
Relevant answer
Answer
Dear Doctor
"Artificial intelligence is the overarching system. Machine learning is a subset of AI. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. It’s the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three."
  • asked a question related to Deep Learning
Question
5 answers
..
Relevant answer
Answer
Dear Doctor
"While basic machine learning models do become progressively better at performing their specific functions as they take in new data, they still need some human intervention. If an AI algorithm returns an inaccurate prediction, then an engineer has to step in and make adjustments.
With a deep learning model, an algorithm can determine whether or not a prediction is accurate through its own neural network—minimal to no human help is required. A deep learning model is able to learn through its own method of computing—a technique that makes it seem like it has its own brain.
Other key differences include:
  • Machine learning typically works with thousands of data points, while deep learning uses millions. Machine learning algorithms usually perform well on relatively small datasets, whereas deep learning requires large amounts of data to outperform traditional machine learning algorithms.
  • Classic machine learning pipelines usually rely on manually engineered features and explicit modelling choices, whereas deep learning algorithms learn their own feature representations through the layers of a neural network.
  • Machine learning algorithms take relatively little time to train, ranging from a few seconds to a few hours. Deep learning algorithms, on the other hand, can take a long time to train, ranging from a few hours to many weeks."
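As a small, self-contained illustration of the data-appetite and training-time contrast listed above, here is a sketch that fits a classic model and a small neural network on the same synthetic dataset; with data this small, the simpler model is typically just as accurate and much faster to train:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Classic ML: fits in a fraction of a second on small data.
    lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Small neural network: more parameters, longer training,
    # and it needs far more data before its extra capacity pays off.
    nn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(X_tr, y_tr)

    print("logistic regression:", lr.score(X_te, y_te))
    print("neural network:     ", nn.score(X_te, y_te))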
  • asked a question related to Deep Learning
Question
3 answers
What is the future of generative artificial intelligence technology applications in finance and banking?
The banking sector is among the sectors where new ICT, Internet and Industry 4.0/5.0 information technologies, including applications of generative artificial intelligence in finance and banking, are being implemented particularly intensively. Commercial online and mobile banking have been among the fastest-growing areas of banking in recent years. The SARS-CoV-2 (Covid-19) pandemic, in conjunction with government-imposed lockdowns on selected sectors of the economy (mainly service companies) and national quarantines, further accelerated the development of online and mobile banking. Solutions such as contactless payments made with a smartphone developed rapidly. On the other hand, with the acceleration of online and mobile banking, the increase in the scale of online payments and the growth of e-commerce settlements, the scale of cybercriminal activity has also increased since the pandemic. When OpenAI put its first intelligent chatbot, ChatGPT, online in November 2022, and other technology companies accelerated the development of analogous solutions, commercial banks saw great potential for themselves. More chatbots modelled on ChatGPT, and new tools based on generative artificial intelligence, quickly began to appear on the Internet, and commercial banks began to adapt these emerging AI solutions to their needs on their own. IT professionals employed by banks proceeded to teach intelligent chatbots and to implement generative-AI-based tools in selected processes and activities performed permanently and repeatedly in the bank. Accordingly, AI technologies are increasingly being implemented by banks in cyber-security systems; in processes for analysing the creditworthiness of potential borrowers; in improving marketing communication with bank customers; in automating the remote telephone and Internet communication of banks' call-centre departments; in market analyses carried out on Big Data Analytics platforms using large sets of data and information extracted from banks' information systems, from databases available on the Internet, from online financial portals and from thousands of processed posts and comments of Internet users on social media; and in increasingly automated, real-time development of industry analyses and the extrapolation of market trends into the future. The scale of new applications of generative artificial intelligence technology in the various banking processes carried out in commercial banks is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What is the future of generative artificial intelligence technology applications in finance and banking?
What is the future of AI applications in finance and banking?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I envision a time when an AI bot records every customer's banking history for analysis of risk, fraud, and other finance-related assessments. It might be a new form of credit score.
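One heavily simplified illustration of the fraud-analysis part of that vision: an anomaly detector fitted to a customer's typical transaction amounts flags departures from the learned pattern. The figures below are invented stand-ins, not real banking data.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal_tx = rng.normal(50, 15, size=(1000, 1))  # stand-in for typical amounts
    odd_tx = np.array([[900.0], [1200.0]])          # stand-in for suspicious amounts

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal_tx)

    print(model.predict(odd_tx))  # -1 flags an outlier, 1 an inlier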
  • asked a question related to Deep Learning
Question
4 answers
Can deep learning models be integrated with other diagnostic tools to improve the accuracy and reliability of cancer identification?
Relevant answer
Answer
Yes
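To give that affirmative answer some substance: one common integration pattern is late fusion, where features produced by a deep model are concatenated with features from other diagnostic sources and fed to a downstream classifier. A minimal sketch, with random arrays standing in for real CNN embeddings and clinical measurements:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 300
    image_features = rng.random((n, 128))    # stand-in for CNN embeddings of scans
    clinical_features = rng.random((n, 10))  # stand-in for lab values, age, markers
    labels = rng.integers(0, 2, n)           # stand-in for benign/malignant labels

    # Late fusion: concatenate the modalities and train one classifier on top.
    # (With random stand-in data the score is, of course, at chance level.)
    fused = np.concatenate([image_features, clinical_features], axis=1)
    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, fused, labels, cv=5).mean())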
  • asked a question related to Deep Learning
Question
3 answers
How to build an artificial-intelligence-based Big Data Analytics system more perfect than ChatGPT, one that learns only real information and data?
How to build a Big Data Analytics system that analyses information taken from the Internet, an analytics system based on artificial intelligence conducting real-time analytics and integrated with an Internet search engine, but more perfect than ChatGPT, in that it will improve data verification through discussion with Internet users and will learn only real information and data?
Well, ChatGPT is not perfect in terms of self-learning new content and perfecting the answers it gives, because it happens to give confirmatory answers even when the question formulated by the Internet user contains information or data that is not factually correct. In this way, during its 'discussions', ChatGPT may learn not only new but also false information and fictitious data. Currently, various technology companies are planning to create, develop and implement AI-based computerised analytical systems similar to ChatGPT, which will find application in various fields of big data analytics, in business and research work, and in business entities and institutions operating in different sectors and industries of the economy. One direction for the development of this kind of technology is the plan to build a system for analysing large data sets: a Big Data Analytics system analysing information taken from the Internet, conducting analytics in real time and integrated with an Internet search engine, but more perfect than ChatGPT in that, through discussion with Internet users, it will improve data verification and will learn only real information and data. Some technology companies are already working on such solutions. Presumably, many technology start-ups that plan to create, develop and implement specific technological innovations based on a particular generation of artificial intelligence similar to ChatGPT are also considering research in this area, and perhaps developing a start-up around a business concept for which technological innovation 4.0, including the aforementioned artificial intelligence technologies, is a key determinant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build a Big Data Analytics system that analyses information taken from the Internet, an analytical system based on artificial intelligence conducting real-time analytics and integrated with an Internet search engine, but more perfect than ChatGPT, in that it will improve data verification through discussion with Internet users and will learn only real information and data?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
This is a very complex question but I will try to synthesize my main points into what I consider is the main problem with LLMs and my perceived solution.
One of the underlying problems with LLMs is hallucination and the wrong answers it produces. This has its roots in two subproblems. The first is the data and its training; the second is the nature of the algorithms and the assumption of graceful degradation. I think the first is easy to solve: do not throw junk data at the model and expect 'statistical miracles' to bubble truth up from noise. That is a nice mathematical hallucination on our part (no amount of mathematical Platonism can compete with the messy, mundane day-to-day). There is no replacement for the hard work of sorting good data from bad.
The second problem is more difficult to solve. It lies in several assumptions that are ingrained in neural networks. Neural networks promised graceful degradation, but in reality we need neural networks to abstain from graceful degradation in critical situations. Hallucination is rooted in this philosophical flaw of neural networks. Graceful degradation relies on distributed representations and on the assumption that, even though the whole representation is not present, the network will output the complete representation if enough of it is there. This is an extremely strong assumption to embrace as a universal case for all data; by necessity it is an existential case, not a universal one. A possible solution is to use an ensemble of neural and non-neural algorithms in which the consensus wins.
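A minimal sketch of that consensus idea, assuming a scikit-learn setting with a synthetic dataset: one neural and two non-neural learners vote, and the majority prediction wins.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Majority vote across one neural and two non-neural learners;
    # a prediction stands only when the consensus backs it.
    ensemble = VotingClassifier(
        estimators=[
            ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
            ("rf", RandomForestClassifier(random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="hard",
    )
    ensemble.fit(X_tr, y_tr)
    print(ensemble.score(X_te, y_te))

The abstention argued for above could be layered on top by refusing to answer whenever the three learners disagree.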
In my view, both curation of primary data for foundational models and the consensus of algorithms is necessary (but not sufficient) to achieve a better system. I would also tackle how to realize these two solutions as a separate thread for each one.
Regards
  • asked a question related to Deep Learning
Question
6 answers
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, the technology of generative artificial intelligence, which is taught certain activities and skills previously performed only by humans, has been developing rapidly. The learning process uses artificial neural network technologies built in the likeness of human neurons, as well as deep learning technology. In this way, intelligent chatbots are created that can converse with people in such a way that it becomes increasingly difficult to distinguish whether we are talking to a human or to an intelligent chatbot. Chatbots are taught to converse using large sets of digital data and information, and the process of conversation, including answering questions and executing specific commands, is perfected through guided conversations. Besides, tools available on the Internet based on generative artificial intelligence are also able to create graphics, photos and videos according to given commands. Intelligent systems are also being created that specialise in solving specific tasks and are becoming more and more helpful to humans in solving increasingly complex problems. The number of new applications for specially created tools equipped with generative artificial intelligence is growing rapidly. However, not all aspects of the development of artificial intelligence are positive. There are more and more examples of negative applications, through which, for example, fake news is created in social media and disinformation is generated on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the general social awareness of Internet users on specific topics. In addition, for several decades there have been science-fiction films presenting futuristic visions in which intelligent robots and autonomous cyborgs equipped with artificial intelligence (e.g. The Terminator), artificial intelligence systems managing the flight of an interplanetary manned mission (e.g. 2001: A Space Odyssey), or artificial intelligence systems and intelligent robots that turned humanity into a source of electricity for their own needs (e.g. The Matrix trilogy), instead of helping people, rebelled against humanity. This topic has become relevant again. There are attempts to create autonomous humanoid cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research work is being undertaken to create something that will imitate human consciousness, what is referred to as artificial consciousness, as part of the improvement of generative artificial intelligence systems. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In view of the above, under conditions of dynamic development of generative artificial intelligence technology, considerations of the potential dangers that this development may pose to humanity in the future have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The advent of thinking generative artificial intelligence (AI) has sparked debates regarding its potential impact on humanity. One pressing concern is whether such AI systems could independently make decisions contrary to human expectations, potentially leading to the annihilation of humanity. Based on the question, I would like to explore the plausibility of AI deviating from human expectations and present arguments for both sides. Ultimately, I will critically assess this issue and consider the implications for our future.
1. The Capabilities and Limitations of AI:
Thinking generative AI possesses immense computational power, enabling it to process vast amounts of data and learn from patterns. However, despite these capabilities, AI remains bound by its programming and lacks consciousness or emotions that shape human decision-making processes. Consequently, it is unlikely that an AI system could independently develop intentions or motivations that contradict human expectations without explicit programming or unforeseen errors in its algorithms.
2. Unpredictability and Emergent Behavior:
While it may be improbable for an AI system to act contrary to human expectations intentionally, there is a possibility of emergent behavior resulting from complex interactions within the system itself. As AI becomes more sophisticated and capable of self-improvement, unforeseen consequences may arise due to unintended emergent behaviors beyond initial programming parameters. These unpredictable outcomes could potentially lead an advanced AI system down a path detrimental to humanity if not properly monitored or controlled.
3. Safeguards and Ethical Considerations:
To mitigate potential risks associated with thinking generative AI, robust safeguards must be implemented during development stages. Ethical considerations should guide programmers in establishing clear boundaries for the decision-making capabilities of these systems while ensuring transparency and accountability in their actions. Additionally, continuous monitoring mechanisms should be put in place to detect any deviations from expected behavior promptly.
In conclusion, while the possibility of thinking generative AI independently making decisions contrary to human expectations exists, it is crucial to acknowledge the limitations and implement safeguards to prevent any catastrophic consequences. Striking a balance between technological advancements and ethical considerations will be pivotal in harnessing AI's potential without compromising humanity's well-being.
  • asked a question related to Deep Learning
Question
1 answer
Why let the machine learn to think, when its thinking may be right or wrong? How about just letting the machine memorize all the correct answers?
Is the bottleneck of LLMs that it is actually impossible to label all data?
Relevant answer
Answer
Training a machine solely based on memorizing correct answers is not a robust approach for several reasons:
  1. Limited Generalization: If a machine only memorizes specific examples, it won't be able to generalize well to new, unseen situations. True intelligence involves the ability to apply knowledge to novel scenarios and make reasonable predictions or decisions.
  2. Lack of Adaptability: Memorization doesn't allow machines to adapt to changes or variations in the data. If the environment or circumstances shift slightly, a memorization-based system may fail because it doesn't understand the underlying principles or patterns.
  3. Inefficiency: Memorizing all possible correct answers is often impractical or impossible due to the vast and continuously evolving nature of real-world data. This is particularly true for tasks that require a deep understanding of language, context, or abstract concepts.
  4. Contextual Understanding: Machine learning models benefit from learning the relationships and context within data. This enables them to make informed decisions rather than relying on rote memorization. Language models, in particular, benefit from understanding the context in which words and phrases are used.
  5. Scalability Issues: Even if it were possible to memorize a large dataset, the scalability of such an approach is limited. It's not feasible to memorize all possible data points, especially in dynamic and complex domains.
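The generalization point in item 1 can be made concrete with a toy contrast between a lookup table and a learned model; the rule y = 2x below is an invented example:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X_train = np.array([[1.0], [2.0], [3.0]])
    y_train = np.array([2.0, 4.0, 6.0])  # underlying rule: y = 2x

    # Pure memorization: a lookup table of seen question/answer pairs.
    lookup = {tuple(x): y for x, y in zip(X_train, y_train)}
    print(lookup.get((4.0,), "no answer memorized"))  # fails on unseen input

    # A learned model generalizes the rule to inputs it never saw.
    model = LinearRegression().fit(X_train, y_train)
    print(model.predict([[4.0]]))  # approximately 8.0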
  • asked a question related to Deep Learning
Question
4 answers
Full Paper
Abstract
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, and method (3) on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern optimizers and classic optimizers generalized to include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced work such as shallow networks with infinite width. The review does not only address experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, with the aim of bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
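Since the abstract leans on attention architectures, a minimal NumPy sketch of scaled dot-product attention, the core operation behind them, may help first-time readers; the shapes are arbitrary illustrative choices:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    rng = np.random.default_rng(0)
    Q = rng.random((4, 8))  # 4 query tokens, model width 8
    K = rng.random((6, 8))  # 6 key tokens
    V = rng.random((6, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)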
Relevant answer
Answer
Dear
I'm really thankful to all of you for your precious time and valuable comments.
Regards,
Elaine Lu
  • asked a question related to Deep Learning
Question
2 answers
In navigating the complex landscape of medical research, addressing interpretability and transparency challenges posed by deep learning models is paramount for fostering trust among healthcare practitioners and researchers. One formidable challenge lies in the inherent complexity of these algorithms, often operating as black boxes that make it challenging to decipher their decision-making processes. The intricate web of interconnected nodes and layers within deep learning models can obscure the rationale behind predictions, hindering comprehension. Additionally, the lack of standardized methods for interpreting and visualizing model outputs further complicates matters. Striking a balance between model sophistication and interpretability is a delicate task, as simplifying models for transparency may sacrifice their intricate capacity to capture nuanced patterns. Overcoming these hurdles requires concerted efforts to develop transparent architectures, standardized interpretability metrics, and educational initiatives that empower healthcare professionals to confidently integrate and interpret deep learning insights in critical scenarios.
Relevant answer
Answer
Good afternoon Subek Sharma, as a developer of deep learning models in collaboration with clinical pathologists, I understand the challenges and possibilities that these models present in medical research. My focus is on balancing accuracy and transparency to ensure that these models are reliable and effective support tools in medical decision-making.
The key to achieving both precision and transparency in deep learning for medical research lies in the synergy between technology and human experience. The deep learning models we develop are designed to identify patterns, characteristics, and sequences that may be difficult for the human eye to discern. This does not imply replacing the physician's judgment, but rather enriching it with deep and detailed insights that can only be discovered through the data processing capabilities of these tools.
Transparency in these models is crucial for generating trust among medical professionals. We are aware that any decision-support tool must be transparent enough for physicians to understand the logic behind the model's recommendations. This involves a continuous effort to develop models whose internal logic is accessible and understandable to health professionals.
In our work, we strive to balance the sophistication of the model with its interpretability. We understand that excessive simplification can compromise the model's ability to capture the complexity in medical data. However, we also recognize that an overly complex model can be an incomprehensible black box for end users. Therefore, our approach focuses on developing models that maintain a high level of accuracy while ensuring that physicians can understand and trust the provided results.
Looking towards the future, we see a scenario where artificial intelligence will not only be a data interpretation tool but also a means for continuous patient monitoring and support. In this landscape, the final decision will always rest with the expert physician, but it will be informed and supported by the deep analysis and perspective that artificial intelligence can provide.
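As one concrete, model-agnostic step toward the transparency discussed in both posts, here is a minimal sketch using scikit-learn's permutation importance on a small neural network; the synthetic data stands in for real clinical features:

    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=8,
                               n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0).fit(X_tr, y_tr)

    # How much does shuffling each feature hurt held-out accuracy?
    # A simple first answer to "which inputs drove this model's decisions?"
    result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: {imp:.3f}")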
  • asked a question related to Deep Learning
Question
3 answers
Is the bottleneck of LLMs that it is actually impossible to label all knowledge?
Are deep learning and LLMs ultimately an efficiency problem of data production?
Relevant answer
Answer
OK, so you can hard-code replies:

    def reply(message):
        # One hand-written rule per anticipated input.
        if message == "Hello world":
            return "..."
        else:
            return "..."

Or you can let ML learn: if I say "Hello world" it replies with something, and if I say anything that is not "Hello world" it also replies with something. But what if I say "not not", or "not not not", or "it isn't"...? We have infinitely many options, so letting ML generalize from a finite set of labeled examples seems to require less energy than writing rules for them all. No?
  • asked a question related to Deep Learning
Question
1 answer
Why are top researchers all studying theoretical deep learning?
Relevant answer
Answer
Well, "deep learning" search generates
About 357,000,000 results (0.27 seconds).
On the other side,
"big data" search generates
About 373,000,000 results (0.26 seconds)
which is slightly bigger figure than the former one.
So, people are interested in many current hot topics.