Science topic
Deep Learning - Science topic
Explore the latest questions and answers in Deep Learning, and find Deep Learning experts.
Questions related to Deep Learning
Hey everyone,
I'm writing my master's thesis on the impact of artificial intelligence on business productivity.
This study is mainly aimed at those of you who develop AI or use these technologies in your professional environment.
This questionnaire will take no more than 5 minutes to complete, and your participation is confidential!
Thank you in advance for your time and contribution!
To take part, please click on the link below: https://forms.gle/fzzHq4iNqGUiidTWA
How should the development of AI technology be regulated so that this development and its applications proceed in accordance with ethics?
How should the development of AI technology be regulated so that this development and its applications proceed in accordance with ethics, so that AI technology serves humanity, does not harm people, and does not generate new categories of risk?
A SWOT analysis of artificial intelligence applications in business shows both many existing and many emerging business uses of the technology, i.e., many development opportunities arising from the current fourth and/or fifth technological revolution across various spheres of business activity, as well as many risks arising from uses that are incompatible with prevailing social norms, with standards of reliable business conduct, and with business ethics. Among the most widely recognized negative aspects of misusing generative artificial intelligence is the use of AI-equipped graphics applications available on the Internet that make it simple and easy to generate photos, graphics, images, videos and animations which, in very realistic form, depict something that never happened, i.e., which present "fictitious facts" in a highly professional manner. In this way, Internet users can become generators of disinformation on online social media, posting such generated images, photos and videos with descriptions, posts and comments in which the depicted "fictitious facts" are also described in an editorially correct manner. Moreover, those descriptions, posts and comments can themselves be edited with the help of intelligent chatbots available on the Internet, such as ChatGPT, Copilot and Gemini. Disinformation, however, is not the only serious problem, and it has intensified significantly since OpenAI released the first public version of the ChatGPT chatbot in November 2022.
In companies and enterprises implementing generative artificial intelligence in various spheres of business, a new category of technical and operational risk associated with the applied AI technology has emerged. In addition, there is a growing scale of risk arising from conflicts of interest between business entities over the not yet fully regulated copyright status of works created with applications and information systems equipped with generative AI. Accordingly, there is demand for a standard, a kind of digital signature, with which works created using AI technology would be electronically signed, so that each such work is unique and unrepeatable and its counterfeiting seriously hampered. These, however, are only some of the negative aspects of developing AI applications for which no legal norms yet function. In mid-2023 and then in the spring of 2024, European Union bodies published preliminary versions of legal norms on the proper, ethical business use of the technology, named the AI Act. The AI Act defines a number of specific types of AI applications deemed inappropriate and unethical, i.e., applications that should not be used, and classifies various types and concrete examples of inappropriate and unethical use of AI, in business and non-business contexts alike, according to their level of negative impact on society. An important open issue is the extent to which the technology companies developing AI will commit to respecting such regulations, so that the ethical use of the technology is also defined, as far as possible, in technological terms within the companies that create, develop and implement it.
Moreover, for the AI Act's legal norms not to remain a dead letter once they come into force, it is necessary to introduce sanction instruments in the form of specific penalties for business entities that use artificial intelligence unethically, antisocially or contrary to the AI Act. On the other hand, it would also be a good solution to introduce a system of rewards for companies and businesses that make the most proper, pro-social and fully ethical use of AI technologies, in accordance with the AI Act's provisions. Given that the AI Act is to come into force only after more than two years, it is necessary to continuously monitor the development of AI technology, verify the validity of the AI Act's provisions against that dynamic development, and successively amend them, so that they are not outdated by the time they take effect. It is therefore to be hoped that, despite rapid technological progress, the provisions on ethical applications of artificial intelligence will be constantly updated and the legal norms shaping the development of AI technology amended accordingly. If the AI Act achieves these goals to a significant extent, ethical applications of AI technology should become the norm, and the technology could then be described as ethical generative artificial intelligence finding ever new applications.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the development of AI technology be regulated so that this development and its applications are carried out in accordance with the principles of ethics?
How should the development of AI technology be regulated so that this development and its applications are realized in accordance with ethics?
How should the development of AI technology applications be regulated so that it is carried out in accordance with ethics?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I am trying to apply a machine-learning classifier to a dataset, but the dataset is in .pcap format. How can I apply classifiers to this dataset?
Is there a process to convert the dataset into .csv format?
Thanks,
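For real captures, the usual tools are tshark (e.g., `tshark -r capture.pcap -T fields -e frame.time_epoch -e frame.len -E separator=, > capture.csv`) or scapy's `rdpcap`. As a minimal illustration of what such a conversion does, here is a standard-library sketch that assumes the classic libpcap format (not pcapng) and extracts only per-packet timestamp and length features; the feature set and file names are placeholders you would extend per protocol:

```python
import csv
import struct

def pcap_to_rows(data):
    """Parse classic libpcap bytes into (timestamp, captured_len, orig_len) rows."""
    magic = struct.unpack("<I", data[:4])[0]
    endian = "<" if magic == 0xA1B2C3D4 else ">"  # byte order from magic number
    rows, off = [], 24  # skip the 24-byte global header
    while off + 16 <= len(data):
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack_from(
            endian + "IIII", data, off)
        off += 16 + incl_len  # skip the raw payload; parse it per protocol if needed
        rows.append((ts_sec + ts_usec / 1e6, incl_len, orig_len))
    return rows

def pcap_to_csv(pcap_path, csv_path):
    """Write one CSV row per packet, ready for a tabular classifier."""
    with open(pcap_path, "rb") as f:
        rows = pcap_to_rows(f.read())
    with open(csv_path, "w", newline="") as out:
        w = csv.writer(out)
        w.writerow(["timestamp", "captured_len", "orig_len"])
        w.writerows(rows)
```

For intrusion-detection-style features (ports, flags, flow statistics) a purpose-built extractor such as scapy or a flow exporter is the more practical route.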
Which patching method is suitable and practical for preparing input data when training and testing deep learning networks: small overlapping patches, e.g., 50x50 extracted with a small stride, or large non-overlapping patches, e.g., 256x256?
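The trade-off is generally task-dependent: small overlapping patches yield more training samples and smoother predictions at seams, while large non-overlapping patches preserve more context per sample; a common compromise is to train on large patches and predict with overlap plus averaging. A small sketch of the bookkeeping involved, with illustrative sizes (the 512x512 image is an assumption, not from the question):

```python
def patch_starts(length, patch, stride):
    """Top-left offsets along one axis for patches of size `patch`
    stepped by `stride`, with the last patch clamped to the border."""
    if length <= patch:
        return [0]
    starts = list(range(0, length - patch + 1, stride))
    if starts[-1] != length - patch:
        starts.append(length - patch)  # ensure the border is covered
    return starts

def patch_grid(height, width, patch, stride):
    """All (row, col) top-left coordinates covering the image."""
    return [(y, x)
            for y in patch_starts(height, patch, stride)
            for x in patch_starts(width, patch, stride)]

# overlapping 50x50 patches (stride 25) vs non-overlapping 256x256 on 512x512
small = patch_grid(512, 512, 50, 25)    # many overlapping samples
large = patch_grid(512, 512, 256, 256)  # 4 context-rich tiles
```

The same coordinate lists can drive both training-time cropping and test-time stitching.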
Is the design of new pharmaceutical formulations with the involvement of AI technology, including the creation by artificial intelligence of new drugs to treat various diseases, safe for humans?
There are many indications that artificial intelligence can be of great help in discovering and creating new drugs. It can reduce the cost of drug development, significantly shorten the time needed to design new drug formulations and to conduct research and testing, and thus bring patients new therapies for treating diseases and saving lives faster. Thanks to new technologies and analytical methods, the way healthcare professionals treat patients has been changing rapidly in recent times. As scientists overcome the complex problems associated with lengthy research processes, and the pharmaceutical industry seeks to shorten the development of life-saving drugs, so-called precision medicine comes to the rescue. Developing, analyzing, testing and bringing a new drug to market takes a great deal of time, and artificial intelligence is particularly helpful here, including in shortening the time needed to create a new drug. For most drugs, the first step is to synthesize a compound that can bind to a target molecule associated with the disease, usually a protein, which is then tested against various influencing factors. To find the right compound, researchers analyze thousands of candidate molecules; when a compound with the required characteristics is identified, they then search huge libraries of similar compounds for the optimal interaction with the protein responsible for the specific disease. Completing this labor-intensive process today requires many years and many millions of dollars of funding.
When artificial intelligence, machine learning and deep learning are involved in this process, its duration and costs can be reduced significantly, and pharmaceutical companies can bring new drugs to market faster. However, can an artificial intelligence equipped with artificial neural networks, taught through deep learning to carry out the above processes, get it wrong when creating a new drug? What if a drug that was supposed to cure a person of a particular disease produces new side effects that prove even more problematic for the patient than the original disease? What if the patient dies of previously unforeseen side effects? Will insurance companies recognize the artificial intelligence's mistake and compensate the family of the deceased patient? Who will bear the legal, financial and ethical responsibility for such a situation?
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is the design of new pharmaceutical formulations through the involvement of AI technologies, including the creation of new drugs to treat various diseases by artificial intelligence, safe for humans?
Is the creation of new drugs by artificial intelligence safe for humans?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I want to write a theoretical analysis of a DL model showing how it is better than other models. Does anyone have an idea how to write a theoretical analysis of a DL model, or can you provide an article, reference or tutorial for writing such a discussion in a paper?
When I am training deep learning models with different architectures, I sometimes have to change the batch size to prevent a Resource Exhausted Error. Is comparing the performance of models trained with different batch sizes an issue or not?
I see many research papers where the batch size is kept fixed.
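Batch size can interact with the learning rate and with regularization, so papers often fix it to keep comparisons clean. One common workaround under memory limits is gradient accumulation: step on small micro-batches but update once per full batch, keeping the effective batch size constant. A toy sketch on a one-parameter least-squares model (an assumption for illustration, not any particular framework's API):

```python
def grad_mse(w, xs, ys):
    """Summed gradient of 0.5 * (w*x - y)^2 over a chunk of data."""
    return sum((w * x - y) * x for x, y in zip(xs, ys))

def accumulated_step(w, xs, ys, lr, micro_batch):
    """One update over the full batch, computed micro-batch by micro-batch.
    The result matches a single full-batch step, so the effective batch
    size stays fixed even when memory forces smaller per-step chunks."""
    total = 0.0
    for i in range(0, len(xs), micro_batch):
        total += grad_mse(w, xs[i:i + micro_batch], ys[i:i + micro_batch])
    return w - lr * total / len(xs)  # mean gradient over the whole batch
```

In Keras or PyTorch the same idea is implemented by summing gradients across forward/backward passes and calling the optimizer step once per accumulation cycle.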
Dear RG group,
We are going to examine different AI models on large datasets of ultrasound focal lesions with a definitive final diagnosis (pathological examination after surgery for malignant lesions; biopsy and follow-up for benign ones). I am looking for images obtained with different ultrasound scanners using different image optimisation techniques, e.g., harmonic imaging, compound ultrasound, etc., with or without segmentation.
Thank you in advance for your suggestions,
RZS
The project I'm currently working on aims to create a deep learning model for Human Activity Recognition. I'm focusing on system design and implementation. Could someone please help me by sharing some papers or document links to better understand system design and implementation?
Thank you in advance for your assistance.
Hello Guys!
I'm looking for someone working in the following areas; please let's connect.
Area: Manufacturing, additive manufacturing, CNN, mechanical engineering
When a model is trained using a specific dataset with limited diversity in labels, it may accurately predict labels for objects within that dataset. However, when applied to real-time recognition tasks using a webcam, the model might incorrectly predict labels for objects not present in the training data. This poses a challenge as the model's predictions may not align with the variety of objects encountered in real-world scenarios.
- Example: I trained a real-time recognition model for a webcam with classes lc = {a, b, c, ..., m}. The model predicts the classes in lc consistently and accurately. However, when I input an object whose class doesn't belong to lc, it still predicts something from lc.
Are there any solutions or opinions that experts can share to guide me further in improving the model?
Thank you for sharing your opinions on my problem.
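This is the classic closed-set problem: a softmax classifier must always pick something from lc. Common mitigations include confidence thresholding, adding an explicit "other/background" class trained on out-of-set data, and open-set recognition methods (e.g., OpenMax). A minimal sketch of the simplest option, thresholding the softmax confidence (the threshold value here is an illustrative assumption, best calibrated on held-out data):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_with_reject(logits, labels, threshold=0.8):
    """Return a label only when the model is confident; otherwise 'unknown'."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else "unknown"
```

Note that neural networks can be overconfident on far-out-of-distribution inputs, so thresholding alone is a baseline, not a complete solution.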
Chalmers, in his book What Is This Thing Called Science?, describes science as knowledge obtained from information. The most important endeavors of science are prediction and explanation of phenomena. The emergence of big (massive) data leads us to the field of Data Science (DS), whose main focus is prediction. Data, of course, belong to a specific field of knowledge or science (physics, economics, ...).
If DS is able to produce predictions for the field of sociology (for example), to whom does the merit go: the data scientist or the sociologist?
10.1007/s11229-022-03933-2
#DataScience #ArtificialIntelligence #Naturallanguageprocessing #DeepLearning #Machinelearning #Science #Datamining
Are the texts, graphics, photos, animations, videos, etc. generated by AI applications fully unique and unrepeatable, and does the creator using them hold full copyright to them?
Are the texts, graphics, photos, animations, videos, poems, stories, reports, etc. generated by ChatGPT and other AI applications fully unique, unrepeatable and creative, and does the creator using them hold full copyright to them?
Are the texts, graphics, photos, animations, videos, poems, stories, reports, etc. generated by applications based on artificial intelligence technology, such as ChatGPT and other AI applications, fully unique, unrepeatable and creative, and does the creator using them hold full copyright to them?
As part of today's rapid technological advances, new Industry 4.0 technologies are being developed, including, but not limited to, artificial intelligence, machine learning, robotization, the Internet of Things, cloud computing and Big Data Analytics, and are being applied across various industries and sectors. The development of artificial intelligence creates opportunities for its application in many spheres of companies, enterprises and institutions and in various industries and services: improving operational efficiency by increasing the scale of process automation, increasing the capacity to process large sets of data and information, and enabling the implementation of new business models based on large-scale automation of manufacturing processes.
However, the uncontrolled development of artificial intelligence generates serious risks, such as a growing scale of disinformation: fake news, including banners and memes containing AI-crafted photos, graphics, animations and videos presenting "fictitious facts", i.e., depicting, in an apparently very realistic way, events that never happened. In this way, intelligent but imperfect chatbots create so-called hallucinations. Moreover, like many other technologies, Internet applications equipped with generative artificial intelligence can be put not only to positive but also to negative uses.
On the one hand, generative AI offers new tools to improve the work of computer graphic designers and filmmakers. On the other hand, there are controversies about the ethical aspects and the copyright regulations needed for works created using artificial intelligence. Copyright settlements are sometimes not clear-cut, as when it cannot be precisely determined whether plagiarism has occurred and, if so, to what extent. Ambiguity here can also produce divergent court decisions on, for example, whether copyright is granted to individuals who, using Internet applications or information systems equipped with generative AI solutions, act as creators of cultural works or works of art (graphics, photos, animations, films, stories, poems, etc.) that have the characteristics of uniqueness and originality.
That such disputes are already real is shown by the example of OpenAI, which may be in serious trouble over allegations by the editors of The New York Times that ChatGPT was trained on data and information from, among other sources, the newspaper's online news portals. In December 2023, The New York Times filed a lawsuit against OpenAI and Microsoft, accusing them of illegally using the newspaper's articles to train their chatbots, ChatGPT and Bing. According to the newspaper, the companies used millions of texts in violation of copyright law, creating a service that competes with the newspaper, and it is demanding billions of dollars in damages. In view of the above, there are various risks of growing influence on public opinion and on the formation of general public awareness by organizations operating without respect for the law. On the one hand, it is necessary to create digital, computerized and standardized diagnostic tools and information systems, and to build a standardized system of labels informing users, customers and citizens that certain solutions, products and services are the work of artificial intelligence, not of humans. On the other hand, there should be regulations obliging providers to disclose that a given service or product was created not by humans but by artificial intelligence. Many issues concerning the socially, ethically and commercially appropriate use of artificial intelligence will be normatively regulated over the next few years.
Regulations defining the proper use of artificial intelligence technologies by companies developing applications based on these technologies, making these applications available on the Internet, as well as Internet users, business entities and institutions using intelligent chatbots to improve the operation of certain spheres of economic, business activities, etc., are being processed, enacted, but will come into force only in a few years.
On June 14, 2023, the European Parliament passed a landmark piece of legislation regulating the use of artificial intelligence. Since AI technology, mainly generative AI, is developing rapidly, and the currently formulated regulations are scheduled to be implemented between 2026 and 2027, operators using this technology have, on the one hand, plenty of time to bring their procedures and products in line with the adopted regulations. On the other hand, one cannot exclude the scenario that, despite this attempt to comprehensively regulate applications of the technology through a law on its proper, safe and ethical use, it will again turn out in 2027 that dynamic technological progress has outpaced the legislative process concerning rapidly developing technologies.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Are the texts, graphics, photos, animations, videos, poems, stories, reports and other works generated by applications based on artificial intelligence technology, such as ChatGPT and other AI applications, fully unique, unrepeatable and creative, and does the creator using them hold full copyright to them?
Are the texts, graphics, photos, animations, videos, etc. generated by AI applications fully unique, unrepeatable and creative, and does the creator using them hold full copyright to them?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
How do you become a Machine Learning (ML) and Artificial Intelligence (AI) Engineer, or start research in AI/ML, neural networks, and deep learning?
Should I pursue a "Master of Science thesis in Computer Science." with a major in AI to become an AI Engineer?
I have a dataset from lung cancer images with 163 samples (2D images). I use the fine-tuning of deep learning algorithms to classify samples, but the validation loss did not decrease. I augmented the data and used dropout, but the validation loss didn't drop. How can I solve this problem?
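With only 163 samples, overfitting is the most likely culprit; besides augmentation and dropout, common options include freezing more of the pretrained backbone, lowering the learning rate, adding weight decay, using k-fold cross-validation, and early stopping that keeps the best checkpoint. A minimal sketch of the early-stopping logic (framework-agnostic; the patience values are illustrative assumptions):

```python
class EarlyStopping:
    """Stop training when validation loss hasn't improved for `patience` epochs."""
    def __init__(self, patience=5, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss      # improvement: remember it, reset counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1      # no meaningful improvement this epoch
        return self.bad_epochs >= self.patience
```

Keras users get the same behavior from the built-in `EarlyStopping` callback with `restore_best_weights=True`.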
How can the growing scale of disinformation, including factoids and deepfakes generated on social media, be curbed through the use of generative artificial intelligence technology?
In order to reduce the growing scale of disinformation, including disinformation generated on social media through the increasing volume of fake news and deepfakes created with Internet applications based on generative artificial intelligence, that same GAI technology can itself be used. Constantly improved and taught to carry out new types of activities, tasks and commands, intelligent chatbots and other applications based on generative AI can be applied to identify instances of disinformation spread primarily on online social media. Such disinformation is particularly dangerous for children and adolescents; it can significantly shape the general public's awareness of certain issues, influence the development of certain social processes, affect the results of parliamentary and presidential elections, and affect the sales of certain products and services. In the absence of a developed institutional system of media oversight, including of the new online media; of control over the objectivity of content directed at citizens in advertising campaigns; of attention to disinformation by competition and consumer protection institutions; of well-functioning institutions protecting democracy; and of institutions reliably safeguarding journalistic ethics and media independence, the scale of disinformation of citizens by various groups of influence, including public institutions and commercial entities, may be high and may generate high social costs.
Accordingly, the new technologies of Industry 4.0/5.0, including generative artificial intelligence (GAI), should be enlisted to reduce the growing scale of disinformation, including factoids, deepfakes, etc. on social media. GAI technologies can help identify fake-news pseudo-journalistic content, photos containing deepfakes, and factually incorrect content in banners, spots and advertising videos published in various media as part of advertising and promotional campaigns aimed at boosting sales of various products and services.
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in an article of my co-authorship:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the growing scale of disinformation, including factoids and deepfakes generated on social media, be curbed through the use of generative artificial intelligence technology?
How to curb disinformation generated in social media using artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
Perhaps in the future, as a result of the rapid technological advances currently taking place and the rivalry of leading technology companies developing AI, a general artificial intelligence (AGI) will emerge. At present, deliberations remain unresolved on the new opportunities and threats that the construction and development of general artificial intelligence might bring. Rapid progress in generative artificial intelligence, combined with the already intense competition among the technology companies developing it, may lead to the emergence of a strong, super, general artificial intelligence capable of self-development and self-improvement, and perhaps also of autonomy and independence from humans. Such a scenario could lead to a situation in which this AI slips out of human control; perhaps, through self-improvement, it could even reach a state that might be called artificial consciousness. On the one hand, its emergence could bring new possibilities, including perhaps new ways of solving the key problems of the development of human civilization. On the other hand, one should not forget the potential dangers if, in its autonomous development and self-improvement, it were to escape human control completely. Whether this will bring mainly new opportunities or rather new dangers for humankind will probably be determined chiefly by how humans direct the development of AI technology while they still control it.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
If general artificial intelligence (AGI) is created, will it involve mainly new opportunities or rather new threats for humanity?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Hi,
I am developing deep learning model(s) for a binary classification problem. The DL model works with reasonable accuracy. Is there a reliable way to extract features from DL models built with the Keras pipeline? It seems that the feature contributions are distributed among several layers.
Thank You,
Partho
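For extracting learned representations (as opposed to per-input attribution, for which tools like SHAP, Integrated Gradients or Grad-CAM are the usual route), the standard Keras pattern is to build a second `Model` that shares the trained layers but outputs an intermediate activation. A sketch with a hypothetical stand-in model (the layer name `"hidden"` and shapes are assumptions, not from the original question):

```python
import numpy as np
from tensorflow import keras

# hypothetical stand-in for the reader's trained binary classifier
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(16, activation="relu", name="hidden"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# new model sharing the trained weights, cut off at the named hidden layer
feature_extractor = keras.Model(
    inputs=model.inputs, outputs=model.get_layer("hidden").output
)

# each row of `features` is the 16-dimensional learned representation
features = feature_extractor.predict(np.zeros((4, 20), dtype="float32"))
```

Because contributions really are distributed across layers, it is common to extract the penultimate layer and, if needed, compare representations from several depths.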
In other words, why have improvements to neural networks led to an increase in hyperparameters? Are hyperparameters related to some fundamental flaw of neural networks?
In the actual scenario of federated learning, the problem of heterogeneity is an inevitable challenge, so what can we do to alleviate the challenges caused by these heterogeneities?
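Well-known mitigations for statistical (non-IID) heterogeneity include FedProx (a proximal term that keeps local updates near the global model), SCAFFOLD (control variates to correct client drift), and personalization layers; system heterogeneity is usually handled by allowing partial work per round. A toy scalar sketch of the FedProx idea only, not the full algorithm (all values illustrative):

```python
def fedprox_local_step(w_local, w_global, grad_fn, lr=0.1, mu=1.0):
    """One local SGD step on the FedProx objective
    f_k(w) + (mu/2) * (w - w_global)^2  (scalar case for brevity).
    The mu-term pulls the client update back toward the global model,
    limiting drift caused by heterogeneous local data."""
    g = grad_fn(w_local) + mu * (w_local - w_global)
    return w_local - lr * g
```

With `mu=0` this reduces to plain local SGD (FedAvg's inner loop); larger `mu` trades local fit for stability of the global aggregate.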
Neural networks and deep learning algorithms are not appropriate for applications with high safety levels (e.g., ADAS/AD); the achievable test coverage is too low.
Are there any other approaches?
I am inclined to research on EEG classification using ML/DL. The research area seems saturated. Hence, I am confused as to where I can contribute.
I have built a hybrid model for a recognition task that involves both images and videos. However, I am encountering an issue with precision, recall, and F1-score, all showing 100%, while the accuracy is reported as 99.35% ~ 99.9%. I have tested the model on various videos and images (related to the experiment data including seperate data), and it seems to be performing well. Nevertheless, I am confused about whether this level of accuracy is acceptable. In my understanding, if precision, recall, and F1-score are all 100%, the accuracy should also be 100%.
I am curious if anyone has encountered similar situations in their deep learning practices and if there are logical explanations or solutions. Your insights, explanations, or experiences on this matter would be valuable for me to better understand and address this issue.
Note: An ablation study was conducted based on different combinations. In the model where I am confused, without these additional combinations, accuracy, precision, recall, and F1-score are very low. Also, the loss and validation accuracy are very high for the other combinations.
Thank you.
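One common explanation for the pattern described above is rounding in the reporting: in binary classification, precision = recall = 1.0 implies zero false positives and zero false negatives, so accuracy must then also be 1.0. Scores like 99.9% simply display as 100% when printed with fewer digits. A small sketch with illustrative counts:

```python
# Compute the four metrics from a binary confusion matrix and show how
# near-perfect scores can display as exactly 100% after rounding.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Near-perfect classifier: a single false negative in 1000 samples.
p, r, f1, acc = metrics(tp=499, fp=0, fn=1, tn=500)
print(f"precision={p:.0%} recall={r:.0%} f1={f1:.0%} accuracy={acc:.2%}")
# Rounded to whole percent, precision/recall/F1 all display as 100%,
# while accuracy, printed with two decimals, shows 99.90%.
```

So the first things to check are how many decimal places each metric is printed with, and (for multi-class setups) whether the averaging mode (micro/macro/weighted) differs between the accuracy and the other metrics.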
What role does the complexity of the dataset play in the susceptibility of deep learning models to adversarial perturbations?
I am currently involved in research focused on defects detection within the Additive Manufacturing field, and I am seeking an international conference outside India but in Asian countries such as Singapore, Malaysia, Vietnam, Thailand, and the UAE. Please help me finding it.
The conference should meet the following criteria: Organized by reputable professional societies or bodies including ASME, SPIE, ISME, ISTAM, IMechE, IFToMM, IEEE, APS, ACM, ACS, IOP, Elsevier, Springer, IIAV, AMSI, CSIR laboratories, design society, CIRP, CADA, Japan Society of Kansei Engineering, International Association of Packaging Research Institute, Indian Society of Ergonomics, Usability Matters ORG (UMO), International Ergonomics Association, International Institute of Information Design, Vienna, and listed among the first 500 conferences in the Microsoft Conference Ranking.
What new occupations, professional professions, specialties in the workforce are being created or will soon be created in connection with the development of generative artificial intelligence applications?
The recent rapid development of generative artificial intelligence applications is increasingly changing labor markets. The development of generative artificial intelligence applications is increasing the scale of objectification of work performed within various professions. On the one hand, generative artificial intelligence technologies are finding more and more applications in companies, enterprises and institutions increasing the efficiency of certain business processes supporting employees working in various positions. However, there are increasing considerations about the possibility of black scenarios coming true in futurological projections suggesting that in the future many jobs will be completely replaced by autonomic AI-equipped robots, androids or systems operating in cloud computing. On the other hand, in opposition to the black scenarios of future developments in labor markets are contrasted with more positive scenarios presenting futuristic projections of the development of labor markets, where new professions will be created thanks to the implementation of generative artificial intelligence technology into various aspects of economic activity. Which of these two scenarios will be realized to a greater extent in the future is currently not easy to predict precisely.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What new professions, professional occupations, specialties in the workforce are being created or will soon be created in connection with the development of generative artificial intelligence applications?
What new professions will soon be created in connection with the development of generative artificial intelligence applications?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
If man succeeds in building a general artificial intelligence, will this mean that man has become better acquainted with the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
Assuming that if man succeeds in building a general, general artificial intelligence, i.e. AI technology capable of self-improvement, independent development and perhaps also obtaining a state of artificial consciousness then perhaps this will mean that man has fully learned the essence of his own intelligence and consciousness. If this happens, what will be the result? Will man first learn the essence of his own intelligence and consciousness and then build a general, general artificial intelligence, i.e. AI technology capable of self-improvement, independent development and perhaps also obtaining a state of artificial consciousness, or vice versa, i.e. first a general artificial intelligence and artificial consciousness capable of self-improvement and development will be created and then thanks to the aforementioned technological progress made from the field of artificial intelligence, man will fully learn the essence of his own intelligence and consciousness. In my opinion, it is most likely that both processes will develop and implement simultaneously on a reciprocal feedback basis.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If man succeeds in building a general artificial intelligence, i.e., AI technology capable of self-improvement, independent development and perhaps also achieving a state of artificial consciousness, will this mean that man has fully learned the essence of his own intelligence and consciousness?
If man succeeds in building a general artificial intelligence, will this mean that man has better learned the essence of his own consciousness?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Researching world model + reinforcement learning, and in the end realize that we still need to label a lot of data?
Hi all,
I am working with NetSim which is an end-to-end, full-stack, packet-level network simulator and emulator to simulate 5G networks, and want to integrate Deep Reinforcement Learning (RL) for my research.
My understanding of NetSim is reasonably good, and I am now looking to apply RL within this framework so as to learn optimal policies.
I have previously worked on a basic DQN model for Power Control and Rate Adaptation in Cellular Networks. I also tried to use RL to find the optimal serving capacity for a data batch arrival problem.
I have two questions:
(i) Any suggestions for RL projects for 5G using NetSim?
(ii) Where in the NetSim code should I start integrating RL algorithms?
I'm doing some research to explore the application of eXplainable Artificial Intelligence (XAI) in the context of brain tumor detection. Specifically, I aim to develop a model that not only accurately detects the presence of brain tumors but also provides clear explanations for its decisions regarding positive or negative results. My main concerns are making sure that the model's decision-making process is transparent and comprehending the underlying reasoning behind its choices. I would be grateful for any thoughts, suggestions, or links to papers or web articles that address the practical application of XAI in this field (including the dataset types or anything that is related with XAi).
Thank you.
Will the combination of AI technology, Big Data Analytics and the high power of quantum computers allow the prediction of multi-faceted, complex macroprocesses?
Will the combination of generative artificial intelligence technology, Big Data Analytics and the high power of quantum computers make it possible to forecast multi-faceted, complex, holistic, long-term economic, social, political, climatic, natural macroprocesses?
Generative artificial intelligence technology is currently being used to carry out various complex activities, to solve tasks intelligently, to implement multi-criteria processes, to create multi-faceted simulations and generate complex dynamic models, to creatively perform manufacturing processes that require processing large sets of data and information, etc., which until recently only humans could do. Recently, there have been attempts to create computerized, intelligent analytical platforms, through which it would be possible to forecast complex, multi-faceted, multi-criteria, dynamically changing macroprocesses, including, first of all, long-term objectively realized economic, social, political, climatic, natural and other macroprocesses. Based on the experience to date from research work on the analysis of the development of generative artificial intelligence technology and other technologies typical of the current Fourth Technological Revolution, technologies categorized as Industry 4.0/5.0, the rapidly developing various forms and fields of application of AI technologies, it is clear that the dynamic technological progress that is currently taking place will probably increase the possibilities of building complex intelligent predictive models for multi-faceted, complex macroprocesses in the years to come. The current capabilities of generative artificial intelligence technology in the field of improving forecasting models and carrying out forecasts of the formation of specific trends within complex macroprocesses are still limited and imperfect. The imperfection of forecasting models may be due to the human factor, i.e., their design by humans, the determination by humans of the key criteria and determinants that determine the functioning of certain forecasting models. 
In a situation where in the future forecasting models will be designed and improved, corrected, adapted to changing, for example, environmental conditions at each stage by artificial intelligence technology then they will probably be able to be much more perfect than the currently functioning and built forecasting models. Another shortcoming is the issue of data obsolescence and data limitation. There is currently no way to connect an AI-equipped analytical platform to the entire resources of the Internet, taking into account the processing of all the data and information contained in the Internet in real time. Even today's fastest quantum computers and the most advanced Big Data Analytics systems do not have such capabilities. However, it is not out of the question that in the future the dynamic development of generative artificial intelligence technology, the ongoing competition among leading technology companies developing technologies for intelligent chatbots, robots equipped with artificial intelligence, creating intelligent control systems for machines and processes, etc., will lead to the creation of general artificial intelligence, i.e. advanced, general artificial intelligence that will be capable of self-improvement. However, it is important that the said advanced general advanced artificial intelligence does not become fully autonomous, does not become completely independent, does not become out of the control of man, because there would be a risk of this highly advanced technology turning against man which would involve the creation of high levels of risks and threats to man, including the risk of losing the possibility of human existence on planet Earth.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Will the combination of generative artificial intelligence technology, Big Data Analytics and the high power of quantum computers make it possible to forecast multi-faceted, complex, holistic, long-term economic, social, political, climatic, natural macro-processes?
Will the combination of AI technology, Big Data Analytics and high-powered quantum computers allow forecasting of multi-faceted, complex macro-processes?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
To what extent do artificial intelligence technology, Big Data Analytics, Business intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized through Internet marketing, within the framework of social media advertising campaigns?
Among the areas in which applications based on generative artificial intelligence are now rapidly finding application are marketing communication processes realized within the framework of Internet marketing, within the framework of social media advertising campaigns. More and more advertising agencies are using generative artificial intelligence technology to create images, graphics, animations and videos that are used in advertising campaigns. Thanks to the use of generative artificial intelligence technology, the creation of such key elements of marketing communication materials has become much simpler and cheaper and their creation time has been significantly reduced. On the other hand, thanks to the applications already available on the Internet based on generative artificial intelligence technology that enable the creation of photos, graphics, animations and videos, it is no longer only advertising agencies employing professional cartoonists, graphic designers, screenwriters and filmmakers that can create professional marketing materials and advertising campaigns. Thanks to the aforementioned applications available on the Internet, graphic design platforms, including free smartphone apps offered by technology companies, advertising spots and entire advertising campaigns can be designed, created and executed by Internet users, including online social media users, who have not previously been involved in the creation of graphics, banners, posters, animations and advertising videos. Thus, opportunities are already emerging for Internet users who maintain their social media profiles to professionally create promotional materials and advertising campaigns. On the other hand, generative artificial intelligence technology can be used unethically within the framework of generating disinformation, informational factoids and deepfakes. The significance of this problem, including the growing disinformation on the Internet, has grown rapidly in recent years. 
The deepfake image processing technique involves combining images of human faces using artificial intelligence techniques.
In order to reduce the scale of disinformation spreading on the Internet media, it is necessary to create a universal system for labeling photos, graphics, animations and videos created using generative artificial intelligence technology. On the other hand, a key factor facilitating the development of this kind of problem of generating disinformation is that many legal issues related to the technology have not yet been regulated. Therefore, it is also necessary to refine legal norms on copyright issues, intellectual property protection that take into account the creation of works that have been created using generative artificial intelligence technology. Besides, social media companies should constantly improve tools for detecting and removing graphic and/or video materials created using deepfake technology.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent does artificial intelligence technology, Big Data Analytics, Business intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized within the framework of Internet marketing, within the framework of social media advertising campaigns?
How do artificial intelligence technology and other Industry 4.0/5.0 technologies support Internet marketing processes?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
How can artificial intelligence help conduct economic and financial analysis, sectoral and macroeconomic analysis, fundamental and technical analysis ...?
How should one carry out the process of training generative artificial intelligence based on historical economic data so as to build a system that automatically carries out economic and financial analysis ...?
How should the process of training generative artificial intelligence be carried out based on historical economic data so as to build a system that automatically carries out sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
Based on relevant historical economic data, can generative artificial intelligence be trained so as to build a system that automatically conducts sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
The combination of various analytical techniques, ICT information technologies, Industry 4.0/5.0, including Big Data Analytics, cloud computing, multi-criteria simulation models, digital twins, Business Intelligence and machine learning, deep learning up to generative artificial intelligence, and quantum computers characterized by high computing power, opens up new, broader possibilities for carrying out complex analytical processes based on processing large sets of data and information. Adding generative artificial intelligence to the aforementioned technological mix also opens up new possibilities for carrying out predictive analyses based on complex, multi-factor models made up of various interrelated indicators, which can dynamically adapt to the changing environment of various factors and conditions. The aforementioned complex models can relate to economic processes, including macroeconomic processes, specific markets, the functioning of business entities in specific markets and in the dynamically changing sectoral and macroeconomic environment of the domestic and international global economy. Identified and described trends of specific economic and financial processes developed on the basis of historical data of the previous months, quarters and years are the basis for the development of forecasts of extrapolation of these trends for the following months, quarters and years, taking into account a number of alternative situation scenarios, which can dynamically change over time depending on changing conditions and market and sectoral determinants of the environment of specific analyzed companies and enterprises. In addition to this, the forecasting models developed in this way can apply to various types of sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses carried out for securities priced in the market on stock exchanges. 
Market valuations of securities are juxtaposed with the results of the fundamental analyses carried out in order to diagnose the scale of undervaluation or overvaluation of the market valuation of specific stocks, bonds, derivatives or other types of financial instruments traded on stock exchanges. In view of the above, opportunities are now emerging in which, based on relevant historical economic data, generative artificial intelligence can be trained so as to build a system that automatically conducts sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Based on relevant historical economic data, is it possible to train generative artificial intelligence so as to build a system that automatically conducts sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
How should the process of training generative artificial intelligence based on historical economic data be carried out so as to build a system that automatically carries out sectoral and macroeconomic analyses, economic and financial analyses of business entities, fundamental and technical analyses for securities priced on stock exchanges?
How should one go about training generative artificial intelligence based on historical economic data so as to build a system that automatically conducts economic and financial analyses ...?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Deep learning is a branch of machine learning that uses artificial neural networks to perform complex calculations on large datasets. It mimics the structure and function of the human brain and trains machines by learning from examples. Deep learning is widely used by industries that deal with complex problems, such as health care, eCommerce, entertainment, and advertising.
This post explores the basic types of artificial neural networks and how they work to enable deep learning algorithms.
Hello everyone,
I am currently working on a project related to medical imaging and deep learning. Part of my work involves illustrating these complex concepts through diagrams and figures. However, I'm having a hard time finding a software tool that suits my specific needs.
Ideally, I am looking for a program that allows for high-quality rendering of medical imaging data, can incorporate elements of deep learning such as neural network architectures, and has an intuitive interface that's easy to navigate. It would also be great if the software has a good range of customization options to adjust the look and feel of the diagrams to my preference.
Could anyone suggest software tools they have had a positive experience with in this context? Any guidance on learning resources or tutorials for the suggested software would be greatly appreciated as well.
Thank you in advance for your assistance and suggestions.
Want to get experitise in use of deep learning models for health sciences like heart disease prediction among others. Also want to know which libraries, frameworks, and packages can be used in this regard?
Hello,
I'm writing paper and used various optimizers to train model. I changed them during training step to get out of local minimum, and I know that people do that, but I don't know how to name that technique in the paper. Does it even have a name?
It is like simulated annealing in optimization, but instead of playing with temperature (step) we change optimizers between Adam, SDG and RMSprop. I can say for sure that it gave fantastic results.
P.S. Thank you for replies but learning rate scheduling is for leaning rate changing, optimizer scheduling is for other optimizer parameters, in general it is hyperparameter tuning. What I'm asking is about switching between optimizers, not modifying their parameters.
Thanks for support,
Andrius Ambrutis
How would you address the issue of model interpretability in deep learning, especially when dealing with complex neural network architectures, to ensure transparency and trust in the decision-making process?
Would you choose to participate in a manned mission, space expedition, tourist space trip to Mars in a situation where the spacecraft was controlled by a highly technologically advanced generative artificial intelligence?
The technologically leading companies currently building rockets and other spacecraft have aspirations to build a new generation of spaceplanes and bring intercontinental aviation into the era of intercontinental paracosmic flights taking place near the orbital sphere of planet Earth. On the other hand, the aforementioned leading technology companies are building rockets, satellites and space landers to be sent to Earth's moon and also those to be sent to the planet Mars as well. Manned flights to the Earth's Moon are to be resumed and manned bases are to be built on the Moon in the 2020s perspective of the current 21st century. then manned missions to the planet Mars are to be implemented in the 1930s perspective of the current century. It may also be that in the perspective of the next decades, manned bases will be built on Mars and perhaps there will be colonization of this as yet inaccessible planet for humans. Perhaps in the perspective of the second half of the present century there will already be periodic manned missions, space expeditions, tourist space travel to Mars. If this were to happen, it would not be out of the question that participating such manned missions, space expeditions, tourist space travel to Mars will be carried out using spacecraft that will be largely autonomously controlled with the help of highly technologically advanced generative artificial intelligence.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Would you choose to participate in a manned mission, space expedition, tourist space travel to Mars in a situation where the spacecraft is controlled by a highly technologically advanced generative artificial intelligence?
Would you choose to take part in a tourist space trip to Mars in the situation if the spacecraft was controlled by an artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
How can artificial intelligence technology help in the development and deployment of innovative renewable and zero-carbon energy sources, i.e. hydrogen power, hydrogen fusion power, spent nuclear fuel power, ...?
In view of the above, with the development of renewable and emission-free energy sources there are many technological and environmental constraints on certain categories of spent materials used in this type of energy. On the one hand, it is necessary for power companies to make investments in electricity transmission and storage networks. On the other hand, economical technologies for the production of low-cost energy storage and recycling, disposal of used batteries and photovoltaic panels, including the recovery of rare metals as part of the aforementioned disposal process, are still to be developed. In addition, the problem of overheating of batteries in electric vehicles and the occurrence of situations of spontaneous combustion of these devices and dangerous, difficult to extinguish fires of the said vehicles are still not fully resolved. If the solution to such problems is mainly a matter of necessary improvements in technology or the creation of new, innovative technology, then arguably generative artificial intelligence technology should come to the rescue in this regard.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
Important aspects of the implementation of the green transformation of the economy, including the development of renewable and zero-carbon energy sources I included in my article below:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I invite you to discuss this important topic for the future of the planet's biosphere and climate.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence technology help in the development and deployment of innovative renewable and carbon-free energy sources, i.e. hydrogen power, hydrogen fusion power, spent nuclear fuel power, ...?
How can artificial intelligence technology help in the development and deployment of renewable and emission-free energy sources?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I am interested to learn machine learning and Deep learning ,so in this area please suggestions best one
Hello,
I am looking for articles in which the analysis is done with the Machine Learning Models and Neural Networks. Kindly suggest any such articles published within last 1 or 2 years.
Thank You.
To what extent does the ChatGPT technology independently learn to improve the answers given to the questions asked?
To what extent does the ChatGPT consistently and successively improve its answers, i.e. the texts generated in response to the questions asked, over time and when receiving further questions using machine learning and/or deep learning?
If ChatGPT, over time and as it receives successive questions, were to use machine learning and/or deep learning to continuously and successively improve its answers, i.e. the texts generated in response to the questions asked, including repeated questions, then those answers should gradually become better in terms of content, and the scale of errors, non-existent "facts" and factually incorrect "information" created by ChatGPT in the automatically generated texts should gradually decrease. But has the current generation, GPT-4, already applied sufficiently advanced automatic learning to create ever better texts in which the number of errors decreases? This is a key question that will largely determine the possibilities for practical applications of this artificial intelligence technology in various fields, professions, industries and economic sectors.
On the other hand, the possibilities of this learning process for creating better and better answers will become increasingly limited over time if the 2021 knowledge base used by ChatGPT is not updated and enriched with new data, information, publications, etc. In the future, such processes of updating and expanding the source database are likely to be carried out. Whether and how such updates and extensions of the source knowledge base are carried out will be determined by ongoing technological advances and the increasing pressure for business use of such technologies.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent does ChatGPT, with the passage of time and the receipt of further questions using machine learning and/or deep learning technology, continuously, successively improve its answers, i.e. the texts generated as a response to the questions asked?
To what extent does the ChatGPT technology itself learn to improve the answers given to the questions asked?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
You are invited to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology so far? What are the opportunities and threats to the development of artificial intelligence technology and its applications in the future?
A SWOT analysis details the strengths and weaknesses of the past and present performance of an entity, institution, process, problem, issue, etc., as well as the opportunities and threats relating to its future performance over the next months, quarters or, most often, the next few or more years. Artificial intelligence has been known conceptually for more than half a century, but its dynamic technological development has occurred especially in recent years. Currently, many researchers and scientists address, in publications and in debates at scientific symposia, conferences and other events, various social, ethical, business, economic and other aspects of the development of artificial intelligence technology and its applications in various sectors of the economy and in companies, enterprises, financial and public institutions.
Many of the determinants of impact and the risks associated with the development of generative artificial intelligence may be heterogeneous, ambiguous and multifaceted, depending on the context of the technology's potential applications and on other factors at work. For example, the impact of the technology's development on future labor markets is not a homogeneous and unambiguous problem. On the one hand, the more critical assessments of this impact point mainly to the potentially large-scale loss of employment for people in various jobs if it turns out to be cheaper and more convenient for businesses to employ highly sophisticated robots equipped with generative artificial intelligence instead of humans.
On the other hand, some experts analyzing the impact of AI applications on labor markets offer more optimistic visions of the future, pointing out that over the next few years artificial intelligence will not largely deprive people of work; rather, work will change. AI will support employed workers in carrying out their work effectively, significantly increase the productivity of people using specific generative artificial intelligence solutions at work and, in addition, labor markets will change in other ways, i.e. through the emergence of new types of professions and occupations arising from the development of AI applications. In this way, the development of AI applications may generate both opportunities and threats in the future, even within the same application field, the same development area of a company or enterprise, or the same economic sector.
Arguably, such dual scenarios of the potential development of AI technology and its applications, composed of both positive and negative aspects, can be considered for many other factors influencing this development and for many fields of application of this technology. For example, the application of artificial intelligence in new online media, including social media sites, is already generating both positive and negative effects. The positive aspects include the use of AI in online marketing carried out on social media, among others. The negative aspects of Internet applications using AI include the generation of fake news and disinformation by untrustworthy, unethical Internet users. Consider also the use of AI to control an autonomous vehicle or to develop the formula of a new drug for particularly life-threatening human diseases.
On the one hand, this technology can be of great help to humans; but what happens when mistakes are made that result in a life-threatening car accident, or when particularly dangerous side effects of the new drug emerge after a certain period of time? Will the payment of compensation by an insurance company solve the problem? To whom will responsibility be assigned for such possible errors and their particularly negative effects, which we cannot completely exclude at present? So what other examples can you give of artificial intelligence applications with ambiguous consequences? What are the opportunities and risks of past applications of generative artificial intelligence technology versus those of its future potential applications?
These considerations can be extended if, in this kind of SWOT analysis, we take into account not only generative artificial intelligence, its past and prospective development and its growing number of applications, but also the so-called general artificial intelligence that may arise in the future. General artificial intelligence, if built by technology companies, will be capable of self-improvement and, with its capabilities for intelligent, multi-criteria, autonomous processing of large sets of data and information, will in many respects surpass the intellectual capacity of humans.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
I invite you to jointly develop a SWOT analysis for generative artificial intelligence technology: What are the strengths and weaknesses of the development of AI technology to date? What are the opportunities and threats to the development of AI technology and its applications in the future?
What are the strengths, weaknesses, opportunities and threats to the development of artificial intelligence technology and its applications?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I am trying to train a CNN model in Matlab to predict the mean value of a random vector (the Matlab code named Test_2 is attached). To clarify: I generate a random vector with 10 components (using the rand function) 500 times. The plot of each vector versus 1:10 is saved as a separate figure, and the mean value of each of the 500 randomly generated vectors is calculated and saved. The saved images are then used as the input (X) for training (70%), validation (15%) and testing (15%) of a CNN model that is supposed to predict the mean value of the corresponding random vector (Y). However, the RMSE of the model remains too high; in other words, the model does not learn despite changes to its options and parameters. I would be grateful if anyone could kindly advise.
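One quick sanity check, sketched here in Python/NumPy rather than Matlab and with an assumed bar-chart rasterization scheme, is to verify that the target is recoverable from the rendered images at all. If even a linear least-squares fit on flattened pixels reaches a low RMSE, the image-to-mean pipeline is sound and the problem lies in the CNN configuration; if not, the figure rendering (axis scaling, anti-aliasing, margins) is likely destroying the information.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_points, height = 500, 10, 20

def render(vec, height=20):
    """Rasterize a vector into a binary 'bar chart' image, one column per component."""
    img = np.zeros((height, len(vec)))
    for j, v in enumerate(vec):
        img[: int(round(v * (height - 1))) + 1, j] = 1.0  # bar height proportional to v
    return img

X_vec = rng.random((n_samples, n_points))
y = X_vec.mean(axis=1)
X_img = np.stack([render(v, height) for v in X_vec]).reshape(n_samples, -1)

# Split 70/15/15 as in the Matlab setup (train / validation / test)
i_tr, i_va = int(0.7 * n_samples), int(0.85 * n_samples)
A = np.hstack([X_img, np.ones((n_samples, 1))])          # add a bias column
w, *_ = np.linalg.lstsq(A[:i_tr], y[:i_tr], rcond=None)  # linear least-squares baseline

rmse = np.sqrt(np.mean((A[i_va:] @ w - y[i_va:]) ** 2))
print(f"test RMSE of linear baseline: {rmse:.4f}")
```

With this rasterization the pixel count per column is linear in each component, so the baseline RMSE is tiny; if your real exported figures cannot match even a linear baseline, the rendering step is where information is lost.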
I have selected two deep learning models, a CNN and an SAE, for the analysis of a 1-D digitized data set. I need to justify the choice of these two DL models in comparison with other DL models and standard ML models. I am using a genetic algorithm (GA) to optimize the hyperparameter values of the two DL models. Can you give some input on this query? Thanks.
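For the GA part, a minimal sketch of the usual loop (truncation selection, blend crossover, Gaussian mutation) is below in Python/NumPy. The search space, the two hyperparameters and the fitness surface are all hypothetical stand-ins: in practice `fitness` would train the CNN or SAE with the candidate hyperparameters and return its validation score.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical search space: rows are (low, high) -> columns 0/1 below
# dim 0: learning-rate exponent in [-5, -1]; dim 1: number of filters in [4, 64]
BOUNDS = np.array([[-5.0, -1.0], [4.0, 64.0]])

def fitness(ind):
    """Stand-in for validation accuracy; assumed optimum at lr=1e-3, 32 filters.
    In practice: train the CNN/SAE with these hyperparameters, return val score."""
    lr_exp, n_filt = ind
    return -((lr_exp + 3.0) ** 2 + ((n_filt - 32.0) / 16.0) ** 2)

def ga(pop_size=20, n_gen=30, mut_sigma=0.3):
    dim = BOUNDS.shape[0]
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], (pop_size, dim))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]           # keep top half
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        alpha = rng.random((pop_size, dim))
        pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
        pop += rng.normal(0, mut_sigma, pop.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0]) * 0.05
        pop = np.clip(pop, BOUNDS[:, 0], BOUNDS[:, 1])             # Gaussian mutation, clipped
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best = ga()
print("best (lr exponent, filters):", best)
```

The justification for GA over grid/random search is usually that DL hyperparameter surfaces are non-convex and evaluations are expensive, so a population-based search spends its budget near promising regions.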
Need suggestions for publishing my review paper in free SCI/ESCI journals.
1. Need suggestions for adding novelty to my review paper on object detection.
2. Suggestions on future developments and directions for object detection using deep learning.
3. Any graphical representations for comparing the literature on object detection using deep learning.
4. How a SOTA review paper can be insightful for readers.
Thank you; suggestions on any of these points are welcome.
How are deep learning, machine learning, and artificial intelligence used in medical image captioning?
At present, some researchers use machine learning to achieve a one-to-one mapping between spectral information and response, aiming to combine high sensitivity with a wide measurement range. Does this method have drawbacks? How likely is it to work in practice?
Looking for some project ideas using deep/shallow learning for my Masters. I would like to know if there are any research gaps that could be taken up, in medical imaging or elsewhere.
Seeking insights on leveraging deep learning techniques to improve the accuracy and efficiency of object recognition in machine vision systems.
How to integrate two different ML or DL models in a single framework?
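One common way to integrate two different models in a single framework is stacking: train both base models, then fit a simple meta-learner on their predictions. A minimal Python/NumPy sketch on toy data follows; the two base models here (ordinary least squares and a hand-rolled k-NN) are stand-ins for whatever ML/DL models you actually use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data: y = sin(x) + noise
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 300)
X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:], y[200:]

def model_linear(X_fit, y_fit, X_new):
    """Base model 1: ordinary least squares on [x, 1]."""
    A = np.hstack([X_fit, np.ones((len(X_fit), 1))])
    w, *_ = np.linalg.lstsq(A, y_fit, rcond=None)
    return np.hstack([X_new, np.ones((len(X_new), 1))]) @ w

def model_knn(X_fit, y_fit, X_new, k=5):
    """Base model 2: k-nearest-neighbour average."""
    d = np.abs(X_new - X_fit.T)              # (n_new, n_fit) pairwise distances
    idx = np.argsort(d, axis=1)[:, :k]
    return y_fit[idx].mean(axis=1)

# Stacking: fit a linear meta-learner on the base models' held-out predictions
P = np.column_stack([model_linear(X_tr, y_tr, X_va),
                     model_knn(X_tr, y_tr, X_va)])
M = np.column_stack([P, np.ones(len(P))])
meta_w, *_ = np.linalg.lstsq(M, y_va, rcond=None)

rmse_blend = np.sqrt(np.mean((M @ meta_w - y_va) ** 2))
rmse_lin = np.sqrt(np.mean((P[:, 0] - y_va) ** 2))
print(f"linear-only RMSE {rmse_lin:.3f}  stacked RMSE {rmse_blend:.3f}")
```

Alternatives include simple prediction averaging, or feature-level fusion (feeding one model's embeddings into the other); stacking is attractive because the meta-learner learns which base model to trust in which region.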
Hello,
I am a civil engineering graduate interested in transportation research applying machine learning or deep learning. My skill set includes GIS, Python and transportation modelling, and I am open to learning any new skills required. Let's collaborate on some interesting research.
Thank you,
Regards,
Subash Gupta
What are the possibilities for integrating an intelligent chatbot into web-based video conferencing platforms used to date for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
During the SARS-CoV-2 (Covid-19) coronavirus pandemic, due to quarantine periods implemented in many countries, restrictions on the use of physical retail outlets, cultural services and various public places, and government-imposed lockdowns of business entities operating mainly in selected service sectors of the economy, the use of web-based videoconferencing platforms increased significantly. The periodic transfer of education to a remote form conducted via online video conferencing platforms also increased the scale of ICT use in education.
Meanwhile, since the end of 2022, following OpenAI's release on the Internet of one of the first intelligent chatbots, ChatGPT, the development of artificial intelligence applications has accelerated in various fields of online information services, and generative artificial intelligence technology is being implemented in various aspects of the business activities of companies and enterprises. The tools made available on the Internet by technology companies in the form of intelligent language models have been taught to converse with Internet users through technologies modeled on the structure of the human neuron, i.e. artificial neural networks and deep learning, using knowledge bases and databases that have accumulated large amounts of data and information downloaded from many websites. There are now opportunities to combine the above-mentioned technologies so as to obtain new applications and/or functionalities of web-based video conferencing platforms enriched with tools based on generative artificial intelligence.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of connecting an intelligent chatbot to web-based video conferencing platforms used so far for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
What are the possibilities of integrating a smart chatbot into web-based video conferencing platforms?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Imagine machines that can think and learn like humans! That's what AI is all about. It's like teaching computers to be smart and think for themselves. They can learn from mistakes, understand what we say, and even figure things out without being told exactly what to do.
Just like a smart friend helps you, AI helps machines be smart too. It lets them use their brains to understand what's going on, adjust to new situations, and even solve problems on their own. This means robots can do all sorts of cool things, like helping us at home, driving cars, or even playing games!
There's so much happening in Artificial Intelligence (AI), with all sorts of amazing things being developed for different areas. So, let's discuss all the cool stuff AI is being used for and the different ways it's impacting our lives. From robots and healthcare to art and entertainment, anything and everything AI is up to is on the table!
Machine Learning: Computers can learn from data and improve their performance over time, like a student studying for a test.
Natural Language Processing (NLP): AI can understand and generate human language, like a translator who speaks multiple languages.
Computer Vision: Machines can interpret and make decisions based on visual data, like a doctor looking at an X-ray.
Robotics: AI helps robots perceive their environment and make decisions, like a self-driving car navigating a busy street.
Neural Networks: Artificial neural networks are inspired by the human brain and are used in many AI applications, like a chess computer that learns to make winning moves.
Ethical AI: We need to use AI responsibly and address issues like bias, privacy, and job displacement, like making sure a hiring algorithm doesn't discriminate against certain groups of people.
Autonomous Vehicles: AI-powered cars can drive themselves, like a cruise control system that can take over on long highway drives.
AI in Healthcare: AI can help doctors diagnose diseases, plan treatments, and discover new drugs, like a virtual assistant that can remind patients to take their medication.
Virtual Assistants: AI-powered virtual assistants like Siri, Alexa, and Google Assistant can understand and respond to human voice commands, like setting an alarm or playing music.
Game AI: AI is used in games to create intelligent and challenging enemies and make the game more fun, like a boss battle in a video game that gets harder as you play.
Deep Learning: Deep learning is a powerful type of machine learning used for complex tasks like image and speech recognition, like a self-driving car that can recognize stop signs and traffic lights.
Explainable AI (XAI): As AI gets more complex, we need to understand how it makes decisions to make sure it's fair and unbiased, like being able to explain why a loan application was rejected.
Generative AI: AI can create new content like images, music, and even code, like a program that can write poetry or compose music.
AI in Finance: AI is used in the financial industry for things like algorithmic trading, fraud detection, and customer service, like a system that can spot suspicious activity on a credit card.
Smart Cities: AI can help make cities more efficient and sustainable, like using traffic cameras to reduce congestion.
Facial Recognition: AI can be used to recognize people's faces, but there are concerns about privacy and misuse, like using facial recognition to track people without their consent.
AI in Education: AI can be used to personalize learning, automate tasks, and provide educational support, like a program that can tutor students in math or English.
Will generative artificial intelligence, taught various activities so far performed only by humans, solving complex tasks and self-improving in performing specific tasks in a process of deep learning using artificial neural network technology, be able to learn from its activities and, in the process of self-improvement, learn from its own mistakes?
Can the possible future combination of generative artificial intelligence technology and general artificial intelligence result in the creation of a highly technologically advanced super general artificial intelligence that improves itself, and could that self-improvement slip out of human control, making the system independent of its creator, man?
An important issue concerning the prospects for the development of artificial intelligence technology and its applications is whether intelligent systems built on generative artificial intelligence and taught to perform highly complex tasks should be granted a certain range of independence: self-improvement and the repair of randomly occurring faults, errors, system failures, etc. For many years there have been deliberations and discussions about granting a greater range of decision-making autonomy to such systems. On the one hand, if security systems based on generative artificial intelligence are built and developed in public institutions or commercial business entities to provide a certain category of safety for people, it is important to give these intelligent systems a certain degree of decision-making autonomy, because in a serious crisis, natural disaster, geological disaster, earthquake, flood, fire, etc., a human could decide too late relative to the much greater speed of response of an automated, intelligent security, emergency response, early warning, risk management or crisis management system. On the other hand, the greater the degree of self-determination given to an automated, intelligent information system, including a security system, the greater the probability that a failure will change the operation of the system in such a way that the automated, intelligent, generative AI-based system slips completely out of human control.
In order for an automated system to quickly return to correct operation on its own after a negative external crisis factor causes a system failure, some scope of autonomy and self-decision-making should be given to it. Determining what that scope should be requires first carrying out a multifaceted analysis and diagnosis of the factors that can act as risk factors and cause malfunction or failure of an intelligent information system. Moreover, if generative artificial intelligence technology is in the future enriched with highly advanced general artificial intelligence, then the scope of autonomy given to an intelligent information system built to automate a risk management system and provide a high level of safety for people may be large. If, at such a stage of development, a system failure were nevertheless to occur due to certain external, or perhaps also internal, factors, the negative consequences of such a system slipping out of human control could be very large and are currently difficult to assess.
In this way, a paradox of building and developing systems based on super-advanced general artificial intelligence may be realized. The paradox is that the more perfect the automated, intelligent system built by humans, an information system far exceeding the human mind's capacity to process and analyze large sets of data and information, the higher the level of autonomy it will be given to make crisis management decisions, to repair its own failures, and to decide much faster than a human could. However, when, despite a low probability, an abnormal event, a new type of external factor, or the materialization of a new category of risk nevertheless causes the failure of such a highly intelligent system, this may lead to the system slipping completely out of human control. The consequences, above all the negative consequences for humans, of such a loss of control over a highly autonomous intelligent information system based on super general artificial intelligence would be difficult to estimate in advance.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the possible future combination of generative artificial intelligence and general artificial intelligence technologies result in the creation of a highly technologically advanced super general artificial intelligence that improves itself, and could that self-improvement escape human control, making the system independent of its creator, man?
Will generative artificial intelligence, taught various activities so far performed only by humans, solving complex tasks and self-improving in performing specific tasks in a process of deep learning using artificial neural network technology, be able to draw conclusions from its activities and, in the process of self-improvement, learn from its own mistakes?
Will generative artificial intelligence in the future in the process of self-improvement learn from its own mistakes?
The key issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Seeking insights on leveraging deep learning techniques to improve the precision of speech recognition systems when confronted with ambient noise, crucial for applications in diverse, real-world scenarios.
Seeking insights on the practical implementation and success factors of Neural Architecture Search (NAS) techniques in tailoring deep learning models for task-specific optimization.
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Accelerated technological progress has recently been made, including the development of generative artificial intelligence technology. This progress in the improvement and implementation of ICT, including the development of tools based on generative artificial intelligence, marks the transition of civilization to the next technological revolution, i.e. from the phase of technologies typical of Industry 4.0 to Industry 5.0. Generative artificial intelligence is finding more and more new applications through combination with previously developed technologies, i.e. Big Data Analytics, Data Science, Cloud Computing, the Personal and Industrial Internet of Things, Business Intelligence, Autonomous Robots, Horizontal and Vertical Data System Integration, Multi-Criteria Simulation Models, Digital Twins, Additive Manufacturing, Blockchain, Smart Technologies, Cyber Security Instruments, Virtual and Augmented Reality and other Advanced Data Mining technologies.
In addition, the rapid development of generative AI-based tools available on the Internet is driven by the fact that more and more companies, enterprises and institutions are creating chatbots that have been taught specific skills previously performed only by humans. In the process of deep learning, which uses artificial neural network technologies modeled on human neurons, the created chatbots and other generative AI tools are increasingly taking over specific tasks from humans or improving their performance. The main factor in the growing scale of application of generative AI tools in the business activities of companies and enterprises is the great opportunity to automate complex, multi-criteria, organizationally advanced processes and to reduce the operating costs of carrying them out.
On the other hand, certain risks may be associated with the application of AI generative technology in business entities, financial and public institutions. Among the potential risks are the replacement of people in various jobs by autonomous robots equipped with generative AI technology, the increase in the scale of cybercrime carried out with the use of AI, the increase in the scale of disinformation and generation of fake news on online social media through the generation of crafted photos, texts, videos, graphics presenting fictional content, non-existent events, based on statements and theses that are not supported by facts and created with the use of tools available on the Internet, applications equipped with generative AI technologies.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Will there be mainly opportunities or rather threats associated with the development of artificial intelligence applications?
I am conducting research in this area. Particularly relevant issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Is there any formula to find the sample size needed to create machine learning or deep learning models for the detection, localization, segmentation and classification of colon polyps?
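There is no universal closed-form answer, but a common heuristic is to fit an inverse power law, err(n) ≈ a·n^(−b), to a learning curve measured on pilot data and then extrapolate to the target error. A Python/NumPy sketch follows; the pilot error values here are synthetic stand-ins for real validation errors measured at increasing training-set sizes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pilot results (assumed): validation error measured at a few training-set sizes.
# Replace with your own measured learning curve.
n_pilot = np.array([50, 100, 200, 400, 800])
err = 0.5 * n_pilot ** -0.35 + rng.normal(0, 0.002, n_pilot.size)  # synthetic curve

# Fit err ~ a * n^(-b) by linear regression in log-log space
coef = np.polyfit(np.log(n_pilot), np.log(err), 1)
b, log_a = -coef[0], coef[1]

def n_required(target_err):
    """Extrapolate the sample size needed to reach a target error level."""
    return (np.exp(log_a) / target_err) ** (1.0 / b)

print(f"fitted exponent b = {b:.2f}, n for 5% error = {n_required(0.05):.0f}")
```

The extrapolation is only as trustworthy as the fitted curve, so it is usually reported with a range (e.g. refit on bootstrap resamples of the pilot points) rather than as a single number.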
Has the development of artificial intelligence, including especially the current and prospective development of generative artificial intelligence and general artificial intelligence technologies, entered a phase that can already be called an open Pandora's box?
In recent weeks, media covering the prospects for the development of artificial intelligence technology have reported disturbing news. Rival leading technology companies developing ICT, Internet and Industry 4.0/5.0 technologies have entered the next phase of the development of generative artificial intelligence and general artificial intelligence. Generative artificial intelligence is already represented mainly by the intelligent chatbot ChatGPT, which was made available on the Internet at the end of 2022, and further variants of it are already being made openly available to Internet users. Citizens' interest in Internet-accessible intelligent chatbots and other tools based on generative artificial intelligence is very high. When OpenAI made the first publicly available versions of ChatGPT available to Internet users in November 2022, the number of users of the platform grew faster than the previously reported increases in the numbers of users of social media sites in the corresponding first months of their availability. The most recognizable technology companies dominating the markets for online information services are competing in the development of artificial intelligence no longer only in generative artificial intelligence, which, through deep learning and artificial neural networks, is taught to intelligently perform jobs and tasks, write texts, participate in discussions, generate photos and videos, draw graphics and carry out other outsourced tasks previously performed only by humans. They are also competing to build increasingly sophisticated AI solutions referred to as general artificial intelligence.
Futurological projections of the development of constantly improved artificial intelligence suggest a risk that at some point this development will enter another phase, in which advanced general artificial intelligence systems will themselves create even more advanced general artificial intelligence systems. These, with their computing power and advanced processing of large data sets, including Big Data Analytics on platforms that have accumulated huge data sets, will far surpass the analytical capabilities of the human brain, human intelligence and the holistic computing power of all the neurons of the human nervous system. Such a phase, in which advanced general artificial intelligence systems themselves create even more advanced systems, could lead to a situation in which this development is out of human control. In such a situation, the risks associated with the uncontrolled development of advanced general artificial intelligence systems could increase strongly. The levels of risk could be so high as to be comparable to the very serious threats, even the Armageddon of human civilization, depicted in catastrophic futurological projections of artificial intelligence development escaping human control in many science fiction films. The catastrophic images depicted in science fiction films, sometimes bordering on horror, suggest the potential future risks of the kind of arms race already taking place between the globally largest technology companies developing generative artificial intelligence and general artificial intelligence technologies.
If this kind of development of generative artificial intelligence and general artificial intelligence technologies has entered this phase and there is no longer any possibility of stopping it, then perhaps this phase can already be called an open Pandora's box of artificial intelligence development.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Has the development of artificial intelligence, including, above all, the current and prospective development of generative artificial intelligence and general artificial intelligence technologies, entered a phase that can already be called an open Pandora's box of artificial intelligence development?
Has the development of artificial intelligence entered a phase that can already be called an open Pandora's box?
Artificial intelligence technology has been rapidly developing and finding new applications in recent years. The main determinants, including potential opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2024) will be held on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: February 1, 2024
Registration Deadline: March 1, 2024
Final Paper Submission Date: March 15, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Vision and Image technologies
- DL Technologies
- DL Applications
All accepted papers will be published by IEEE and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
This question delves into the domain of deep learning, focusing on regularization techniques. Regularization helps prevent overfitting in neural networks, but this question specifically addresses methods aimed at improving interpretability while maintaining high performance. Interpretability is crucial for understanding and trusting complex models, especially in fields like healthcare or finance. The question invites exploration into innovative and lesser-known techniques designed for this nuanced balance between model performance and interpretability.
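As a minimal, self-contained illustration of one such technique (not tied to any specific answer above): L1 regularization drives irrelevant weights exactly to zero, yielding a sparse linear model whose surviving coefficients are directly readable as feature importances. The data, hyperparameters and the ISTA-style update below are all illustrative assumptions.

```python
import numpy as np

# Sketch: L1 (lasso) regularization for interpretability.
# Synthetic data where only feature 0 actually drives the target.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=n)

w = np.zeros(d)
lr, lam = 0.1, 0.05
for _ in range(500):
    grad = X.T @ (X @ w - y) / n        # gradient of the squared error
    w -= lr * grad
    # proximal (soft-thresholding) step implements the L1 penalty
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

print(np.round(w, 2))  # weight on feature 0 dominates; the rest shrink to zero
```

The sparsity pattern itself is the interpretation: a practitioner can state which inputs the model uses at all, which is often easier to audit than a dense weight vector.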
What analytical tools available on the Internet, supported by artificial intelligence, machine learning, deep learning and artificial neural network technologies, can be helpful in business and can be used in companies and/or enterprises to improve certain activities and areas of business and to implement economic, investment and business projects, etc.?
Since OpenAI brought ChatGPT online in November 2022, interest among business entities in the possibilities of using intelligent chatbots for various aspects of business operations has increased strongly. Intelligent chatbots originally only, or mainly, enabled conversations and discussions and answered questions using specific resources of data, information and knowledge drawn from a selection of multiple websites. In the following months, OpenAI released other intelligent applications on the Internet, allowing Internet users to generate images, photos, graphics and videos, solve complex mathematical tasks, create software for new computer applications, generate analytical reports, and process various types of documents based on given commands. In addition, in 2023, other technology companies also began to make their intelligent applications available on the Internet, through which certain complex tasks can be carried out to facilitate certain processes and aspects of the operations of companies, enterprises, financial institutions, etc., and thus facilitate business. There is a steady increase in the number of intelligent applications and tools available on the Internet that can support various aspects of the business activities carried out in companies and enterprises. At the same time, the number of new business applications of these smart applications is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What analytical tools available on the Internet, supported by artificial intelligence, machine learning, deep learning and artificial neural network technologies, can be helpful in business and can be used in companies and/or enterprises to improve certain activities and areas of business activity and to implement economic, investment and business projects, etc.?
What are the AI-enabled analytical tools available on the Internet that can be helpful to business?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
What is the future of generative artificial intelligence technology applications in finance and banking?
The banking sector is among the sectors leading in the implementation of new ICT, Internet and Industry 4.0/5.0 information technologies, including applications of generative artificial intelligence technology in finance and banking. Commercial online and mobile banking have been among the particularly fast-growing areas of banking in recent years. In addition, during the SARS-CoV-2 (Covid-19) coronavirus pandemic, in conjunction with government-imposed lockdowns on selected sectors of the economy, mainly service companies, and national quarantines, the development of online and mobile banking accelerated. Solutions such as contactless payments made with a smartphone developed rapidly. On the other hand, due to the accelerated development of online and mobile banking, the increase in the scale of payments made online, and the growth of online settlements related to the development of e-commerce, the scale of cybercriminal activity has increased since the pandemic. When OpenAI put its first intelligent chatbot, ChatGPT, online for Internet users in November 2022 and other technology companies accelerated the development of analogous solutions, commercial banks saw great potential for themselves. More chatbots modeled on ChatGPT, and new applications of tools based on generative artificial intelligence technology made available on the Internet, quickly began to emerge. Commercial banks thus began to adapt the emerging new AI solutions to their needs on their own. The IT professionals employed by the banks proceeded to teach intelligent chatbots and to implement tools based on generative AI in selected processes and activities performed permanently and repeatedly in the bank.
Accordingly, AI technologies are increasingly being implemented by banks in cyber-security systems; in processes for analyzing the creditworthiness of potential borrowers; in improving marketing communications with bank customers; in automating the remote telephone and Internet communications of banks' call center departments; in developing market analyses carried out on Big Data Analytics platforms using large sets of data and information extracted from various bank information systems, from databases available on the Internet and online financial portals, and from thousands of processed posts and comments of Internet users on online social media pages; and in the increasingly automated, real-time development, based on current large sets of information and data, of industry analyses and of analyses and extrapolations of market trends into the future. The scale of new applications of generative artificial intelligence technology in the various banking processes carried out in commercial banks is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What is the future of generative artificial intelligence technology applications in finance and banking?
What is the future of AI applications in finance and banking?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Can deep learning models be integrated with other diagnostic tools to improve the accuracy and reliability of cancer identification?
How to build a Big Data Analytics system based on artificial intelligence, more advanced than ChatGPT, that learns only real, verified information and data?
How to build a Big Data Analytics system analysing information taken from the Internet, an analytics system based on artificial intelligence that conducts real-time analytics, is integrated with an Internet search engine, and is more advanced than ChatGPT, which will, through discussion with Internet users, improve data verification and learn only real, verified information and data?
Well, ChatGPT is not perfect in terms of self-learning new content and perfecting the answers it gives, because it happens to give confirming answers when the question formulated by the Internet user contains information or data that is not factually correct. In this way, in the process of learning new content within the 'discussions' held, ChatGPT can also learn false information and fictitious data. Currently, various technology companies are planning to create, develop and implement computerised analytical systems based on artificial intelligence technology similar to ChatGPT, which will find application in various fields of big data analytics, in business and research work, and in various business entities and institutions operating in different sectors and industries of the economy. One of the directions of development of this kind of artificial intelligence technology and its applications are plans to build a system for the analysis of large data sets: a Big Data Analytics system analysing information taken from the Internet, based on artificial intelligence, conducting analytics in real time, integrated with an Internet search engine, but more advanced than ChatGPT, which will, through discussion with Internet users, improve data verification and learn only real, verified information and data. Some technology companies are already working on creating this kind of technological solution and these applications of artificial intelligence technology similar to ChatGPT.
But presumably many technology start-ups that plan to create, develop and implement business-specific technological innovations based on a particular generation of artificial intelligence technology similar to ChatGPT are also considering undertaking research in this area, and perhaps developing a start-up based on a business concept for which technological innovation 4.0, including the aforementioned artificial intelligence technologies, is a key determinant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build a Big Data Analytics system analysing information taken from the Internet, an analytical system based on artificial intelligence that conducts real-time analytics, is integrated with an Internet search engine, and is more advanced than ChatGPT, which will, through discussion with Internet users, improve data verification and learn only real, verified information and data?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Recently, the technology of generative artificial intelligence, which is taught certain activities and skills previously performed only by humans, has been developing rapidly. In the learning process, artificial neural network technologies built in the likeness of human neurons are used, as well as deep learning technology. In this way, intelligent chatbots are created that can converse with people in such a way that it can be increasingly difficult to distinguish whether we are talking to a human or to an intelligent chatbot. Chatbots are taught to converse with the involvement of large sets of digital data and information, and the process of conversation, including answering questions and executing specific commands, is perfected through guided conversations. Besides, tools available on the Internet based on generative artificial intelligence are also able to create graphics, photos and videos according to given commands. Intelligent systems are also being created that specialize in solving specific tasks and are becoming more and more helpful to humans in solving increasingly complex problems. The number of new applications for specially created tools equipped with generative artificial intelligence is growing rapidly. On the other hand, there are not only positive aspects associated with the development of artificial intelligence. There are more and more examples of negative applications of artificial intelligence, through which, for example, fake news is created in social media and disinformation is generated on the Internet. Possibilities are emerging for the use of artificial intelligence in cybercrime and in deliberately shaping the general social awareness of Internet users on specific topics. In addition, for several decades there have been science fiction films presenting futuristic visions in which intelligent robots and autonomous cyborgs equipped with artificial intelligence (e.g. The Terminator), artificial intelligence systems managing the flight of an interplanetary manned mission's ship (e.g. 2001: A Space Odyssey), or artificial intelligence systems and intelligent robots that turned humanity into a source of electricity for their own needs (e.g. the Matrix trilogy), instead of helping people, rebelled against humanity. This topic has become topical again. There are attempts to create autonomous humanoid cyborgs equipped with artificial intelligence systems, robots able to converse with humans and carry out certain commands. Research work is being undertaken to create something that will imitate human consciousness, what is referred to as artificial consciousness, as part of the improvement of generative artificial intelligence systems. There are many indications that humans are striving to create a thinking generative artificial intelligence. It cannot be ruled out that such a machine could independently make decisions contrary to human expectations, which could lead to the annihilation of mankind. In view of the above, in the conditions of the dynamic development of generative artificial intelligence technology, considerations about the potential dangers to humanity that may arise in the future from the development of this technology have once again become relevant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations which could lead to the annihilation of humanity?
Could a thinking generative artificial intelligence independently make decisions contrary to human expectations?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Why let the machine learn to think, when its thinking may be right or wrong? How about just letting the machine memorize all the correct answers?
Is the bottleneck of LLMs that it is actually impossible to label all data?
Full Paper
Abstract
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern optimizers and classic optimizers generalized to include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review does not address only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming at bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions in the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
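Of the ingredients the abstract lists, Gaussian-process regression is the most compact to sketch in code. The following is a minimal illustrative example (not taken from the paper): an RBF kernel, a tiny training set sampled from a sine function, and the standard posterior-mean prediction. The kernel length scale and jitter value are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of Gaussian-process (kernel-machine) regression.
def rbf(A, B, ell=1.0):
    # squared-exponential kernel between row-vector sets A and B
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ell**2)

Xtr = np.linspace(0, 2 * np.pi, 8)[:, None]   # 8 training inputs
ytr = np.sin(Xtr).ravel()                     # noise-free sine targets
Xte = np.array([[np.pi / 2]])                 # single test input

K = rbf(Xtr, Xtr) + 1e-6 * np.eye(len(Xtr))   # jitter for numerical stability
alpha = np.linalg.solve(K, ytr)
mean = rbf(Xte, Xtr) @ alpha                  # posterior predictive mean

print(mean)  # close to sin(pi/2) = 1
```

With near-zero observation noise the posterior mean interpolates the training points, which is why a handful of samples suffices to recover the sine accurately between them.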
In navigating the complex landscape of medical research, addressing interpretability and transparency challenges posed by deep learning models is paramount for fostering trust among healthcare practitioners and researchers. One formidable challenge lies in the inherent complexity of these algorithms, often operating as black boxes that make it challenging to decipher their decision-making processes. The intricate web of interconnected nodes and layers within deep learning models can obscure the rationale behind predictions, hindering comprehension. Additionally, the lack of standardized methods for interpreting and visualizing model outputs further complicates matters. Striking a balance between model sophistication and interpretability is a delicate task, as simplifying models for transparency may sacrifice their intricate capacity to capture nuanced patterns. Overcoming these hurdles requires concerted efforts to develop transparent architectures, standardized interpretability metrics, and educational initiatives that empower healthcare professionals to confidently integrate and interpret deep learning insights in critical scenarios.
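One concrete, widely used starting point for the interpretability problem described above is input-gradient saliency: ranking input features by the magnitude of the output's derivative with respect to them. The tiny fixed-weight network below is purely illustrative; a real medical model would be far larger, and gradient saliency is only a first step, not a full explanation.

```python
import numpy as np

# Illustrative sketch: input-gradient saliency for a tiny 3-input network.
W1 = np.array([[2.0, 0.0, 0.0],
               [0.0, 0.1, 0.0]])     # 3 inputs -> 2 hidden units (fixed weights)
W2 = np.array([1.0, 1.0])            # 2 hidden units -> scalar output

def forward(x):
    return W2 @ np.tanh(W1 @ x)

def saliency(x):
    # |d(output)/d(input)| via the chain rule: W1^T diag(1 - tanh^2) W2
    h = np.tanh(W1 @ x)
    return np.abs(W1.T @ ((1 - h**2) * W2))

s = saliency(np.array([0.3, 0.3, 0.3]))
print(s)  # input 0 matters most; input 2 has zero influence
```

Because input 2 is disconnected from both hidden units, its saliency is exactly zero, which is the kind of direct, checkable statement that can help practitioners begin to audit a model's decision process.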
Is the bottleneck of LLMs that it is actually impossible to label all knowledge?
Are deep learning/LLMs ultimately an efficiency problem of data production?
Why are top researchers all studying theoretical deep learning?