Big Data - Science topic

In information technology, big data is a loosely defined term for data sets so large and complex that they become awkward to work with using on-hand database management tools.
Questions related to Big Data
  • asked a question related to Big Data
Question
2 answers
Hey everyone,
I'm writing my master's thesis on the impact of artificial intelligence on business productivity.
This study is mainly aimed at those of you who develop AI or use these technologies in your professional environment.
This questionnaire will take no more than 5 minutes to complete, and your participation is confidential!
Thank you in advance for your time and contribution!
To take part, please click on the link below: https://forms.gle/fzzHq4iNqGUiidTWA
Relevant answer
Answer
AI tools continue to have a positive impact on productivity. Of those surveyed, 64% of managers said AI's output and productivity are equal to those of experienced, expert managers, and potentially better than any outputs delivered by human managers altogether.
Regards,
Shafagat
  • asked a question related to Big Data
Question
1 answer
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be referred to as sustainable, pro-climate, pro-environment, green, etc.?
Advanced analytical systems, including complex forecasting models that enable multi-criteria, highly sophisticated, big-data-based forecasts of multi-faceted climatic, natural, social, economic and other processes, are increasingly based on new Industry 4.0/5.0 technologies, including Big Data Analytics, machine learning, deep learning and generative artificial intelligence. Generative artificial intelligence enables the application of complex data-processing algorithms according to precisely defined assumptions and human-defined factors. Computerized, integrated Business Intelligence information systems allow real-time analysis on the basis of continuously updated data and the generation of reports and expert opinions according to defined templates for such studies. Digital twin technology allows computers to build simulations of complex, multi-faceted forecast processes according to defined scenarios of how these processes might unfold in the future.
In this regard, it is also important to determine the probability that each of several defined and characterized scenarios of developments, processes or phenomena will occur in the future. Business Intelligence analytics should therefore make it possible to precisely determine the probability of a given phenomenon, the operation of a process or the appearance of described effects, including those classified as opportunities and threats to the future development of the situation. Besides, Business Intelligence analytics should enable precise quantitative estimation of the scale of the positive and negative effects of particular processes, of the factors acting on those processes, and of the determinants conditioning the realization of particular scenarios.
Cloud computing makes it possible, on the one hand, to update the database with new data and information from various institutions, think tanks, research institutes, companies and enterprises operating within a selected sector or industry of the economy and, on the other hand, to enable simultaneous use of the updated database by many beneficiaries, many business entities and/or many Internet users if the database is made available online. Where Internet of Things technology is applied, the database could be accessed from various types of Internet-connected devices. Blockchain technology can increase the cybersecurity of data transferred to the Big Data database, both during updates and when external entities use the analytical system built in this way. Machine learning and/or deep learning technologies combined with artificial neural networks make it possible to train an AI-based system to perform multi-criteria analysis and build multi-criteria simulation models the way a human would. For such complex analytical systems that process large amounts of data and information to work efficiently, it is a good solution to use state-of-the-art supercomputers, and in the future quantum computers, characterized by the high computing power needed to process huge amounts of data in a short time.
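To make the scenario-probability idea above concrete, here is a minimal Python sketch, assuming a toy stochastic growth model: the drift, volatility and scenario thresholds are invented for illustration and stand in for the multi-criteria models a real Big Data Analytics system would use.

```python
import numpy as np

# Hypothetical yearly growth model for one forecast variable; drift and
# volatility are illustrative assumptions, not empirical estimates.
rng = np.random.default_rng(42)
N_PATHS, N_YEARS = 100_000, 10
drift, vol = 0.01, 0.03

shocks = rng.normal(drift, vol, size=(N_PATHS, N_YEARS))
final_level = np.prod(1.0 + shocks, axis=1)   # relative level after 10 years

# Probabilities of three predefined development scenarios.
p_decline = np.mean(final_level < 0.95)
p_stable = np.mean((final_level >= 0.95) & (final_level <= 1.15))
p_growth = np.mean(final_level > 1.15)
print(f"P(decline)={p_decline:.1%}  P(stable)={p_stable:.1%}  P(growth)={p_growth:.1%}")
```

The same pattern scales to multi-factor forecasting models; only the path-generation step changes.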
A center for multi-criteria analysis of large data sets built in this way can occupy a large floor area and house many servers. Because of the necessary cooling and ventilation systems, and for security reasons, this kind of server room can be built underground. Given the large amounts of electricity such a big data analytics center absorbs, it is a good solution to build a power plant nearby to supply it. If this kind of data analytics center is to be described as sustainable, in line with the trends of sustainable development and the green transformation of the economy, then the power plant supplying it should generate electricity from renewable sources, e.g. photovoltaic panels, wind turbines and/or other renewable, emission-free energy sources. In such a situation, i.e. when a data analytics center that processes multi-criteria Big Data and Big Data Analytics information is powered by renewable, emission-free energy sources, it can be described as sustainable, pro-climate, pro-environment, green, etc. Likewise, when the Big Data Analytics center is equipped with advanced generative artificial intelligence technology and powered by renewable, emission-free energy sources, the AI technology used can also be described as sustainable, pro-climate, pro-environment, green, etc.
On the other hand, the Big Data Analytics center can itself be used to conduct multi-criteria analyses and build multi-faceted simulations of complex climatic, natural, economic, social and other processes, for example: to develop scenarios for the future development of processes observed to date; to simulate the continuation of diagnosed historical trends; to develop variant scenarios depending on the occurrence of certain determinants; to determine the probability of those determinants occurring; to estimate the scale of influence of external factors, the potential materialization of certain categories of risk, and the possible occurrence of certain opportunities and threats; and to estimate the probability of the various scenario variants in which the potential continuation of the diagnosed trends was characterized for the processes under study, including processes of sustainable development, the green transformation of the economy and the implementation of sustainable development goals. Accordingly, the data analytics center built in this way can, on the one hand, be described as sustainable because it is powered by renewable, emission-free energy sources and, on the other hand, be helpful in building simulations of complex multi-criteria processes, including the continuation of trends in the determinants and co-creating factors of sustainable processes, e.g. those concerning sustainable economic development.
Therefore, the data analytics center built in this way can be helpful, for example, in developing a complex, multifactor simulation of progressive global warming in subsequent years, of the future negative effects of the deepening scale of climate change and of their negative impact on the economy, and also in forecasting and simulating the future pro-environmental and pro-climate transformation of the classic, growth-oriented, brown, linear economy of excess into a sustainable, green, zero-carbon, zero-growth, closed-loop economy. The sustainable data analytics center built in this way can thus be described as sustainable because it is supplied from renewable, zero-carbon energy sources, and it will also be helpful in simulating future green-transformation processes carried out according to defined assumptions, determinants and estimated probabilities of impact factors and conditions, as well as in estimating costs, gains and losses, opportunities and threats, identifying risk factors and particular categories of risk, and assessing the feasibility of the planned scenarios of the green transformation of the economy. In this way, a sustainable data analytics center can also be of great help in the smooth and rapid implementation of the green transformation of the economy.
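As a rough, back-of-envelope illustration of the "powered by renewables" criterion discussed above, the following sketch checks whether assumed on-site renewable generation covers an assumed facility load; every figure is a placeholder, not real facility data.

```python
# Illustrative check of whether assumed on-site renewables cover a data
# center's load; all figures are placeholders, not real facility data.
it_load_mw = 20.0           # average IT load
pue = 1.3                   # power usage effectiveness (cooling, etc.)
facility_load_mw = it_load_mw * pue

pv_mw, pv_cf = 40.0, 0.15       # PV capacity and assumed capacity factor
wind_mw, wind_cf = 30.0, 0.30   # wind capacity and assumed capacity factor

hours = 8760
demand_mwh = facility_load_mw * hours
supply_mwh = (pv_mw * pv_cf + wind_mw * wind_cf) * hours

print(f"Annual demand: {demand_mwh:,.0f} MWh; renewable supply: "
      f"{supply_mwh:,.0f} MWh ({supply_mwh / demand_mwh:.0%} of demand)")
```

With these assumed numbers the renewables cover only part of the demand, which is exactly the sizing question such a sustainability claim has to answer.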
I described the key issues concerning the green transformation of the economy in the article below:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If a Big Data Analytics data center is equipped with advanced generative artificial intelligence technology and is powered by renewable and carbon-free energy sources, can it be described as sustainable, pro-climate, pro-environment, green, etc.?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 technologies and powered by renewable and carbon-free energy sources?
How to build a sustainable data center based on Big Data Analytics, AI, BI and other Industry 4.0/5.0 and RES technologies?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, building a sustainable data center needs an environment, sustainability and governance (ESG) kind of model. Virtualization and consolidation, together with green building design, come first; cooling systems and training come second.
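A minimal numerical sketch of the virtualization-and-consolidation point made in this answer, with invented server counts and utilization figures:

```python
import math

# Rough estimate of the energy saved by consolidating lightly loaded
# physical servers onto fewer virtualized hosts; all numbers are invented.
physical_servers = 200
avg_utilization = 0.10        # assumed pre-consolidation CPU utilization
watts_per_server = 350.0

target_utilization = 0.60     # assumed post-consolidation target per host
hosts_needed = math.ceil(physical_servers * avg_utilization / target_utilization)

before_kw = physical_servers * watts_per_server / 1000
after_kw = hosts_needed * watts_per_server / 1000
print(f"{physical_servers} servers -> {hosts_needed} hosts; "
      f"{before_kw:.1f} kW -> {after_kw:.1f} kW "
      f"(~{1 - after_kw / before_kw:.0%} IT-load saving, before cooling effects)")
```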
  • asked a question related to Big Data
Question
3 answers
How does generative artificial intelligence technology combined with Big Data Analytics and other Industry 4.0 technologies help in planning and improving production logistics management processes in business entities, companies and enterprises?
Production logistics management in a manufacturing company is currently one of the key areas of business management, significantly affecting the technical and organizational efficiency of business operations. Changes in this efficiency usually also correlate with overall business efficiency and affect the financial results generated by the business entity. Among the key segments of logistics in the enterprise is internal production logistics, on whose organization the efficiency of production processes, and of the enterprise as a whole, also largely depends. In recent years, more and more companies and enterprises have been optimizing production logistics by implementing information systems and automating individual operations in the process.
Production logistics is mainly concerned with ensuring the optimal flow of materials and information in the production of all types of goods. It does not deal with the technology of production processes, but only with the organization of the production system together with its storage and transport environment. It focuses on optimizing all operations related to the production process, such as: supplying the plant with raw materials, semi-finished products and components necessary for production; transporting items between successive stages of production; and transferring the finished product to distribution warehouses. Precisely defining optimal production logistics is a lengthy process, requiring analysis and modification of almost every process taking place in a company. One of the key factors in optimizing production logistics is reducing inventory levels and adjusting them to the ongoing production process, which translates directly into lower storage costs. Effective production logistics management should ensure timely delivery while maintaining high product quality, and it can be supported by implementing new Industry 4.0/5.0 technologies, including Big Data and generative artificial intelligence.
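As one concrete, classical building block of the inventory optimization mentioned above, here is a short sketch of the economic order quantity (EOQ) formula, Q* = sqrt(2DS/H), with illustrative parameter values:

```python
from math import sqrt

# Economic order quantity (EOQ): a classic, simple building block of the
# inventory optimization discussed above. Parameter values are illustrative.
annual_demand = 12_000   # units consumed per year (D)
order_cost = 150.0       # fixed cost per order, handling and transport (S)
holding_cost = 2.5       # cost of holding one unit in stock for a year (H)

eoq = sqrt(2 * annual_demand * order_cost / holding_cost)   # = 1200 units
orders_per_year = annual_demand / eoq                       # = 10 orders
total_cost = order_cost * orders_per_year + holding_cost * eoq / 2

print(f"EOQ = {eoq:.0f} units, {orders_per_year:.1f} orders/year, "
      f"annual ordering+holding cost = {total_cost:.0f}")
```

Real AI-supported planning systems replace these fixed parameters with demand forecasts, but the trade-off between ordering and holding costs is the same.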
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How does the technology of generative artificial intelligence, combined with Big Data Analytics and other Industry 4.0 technologies, help to plan and improve production logistics management processes in business entities, companies and enterprises?
How does generative artificial intelligence technology help in planning and improving production logistics processes in an enterprise?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Dear Sutarman
This is Dr. N.S. Santhi, Professor and Head / MBA, KSR College of Engineering, Tiruchengode - 637 15, Tamil Nadu, India. I would like to become a reviewer for IJEMS. I have given my details below. Kindly do the needful.
name and title : Dr. N.S. Santhi
Professor & Head
Scopus ID : 57211081960
Whatsapp : +91 9500466527
  • asked a question related to Big Data
Question
6 answers
Do companies running social media portals consciously shape the general social awareness of citizens, Internet users through the specific information policies applied?
In recent years, there have been increasing numbers of examples of deliberate practices in which companies operating social media portals consciously shape the general social awareness of citizens and Internet users through the specific information policies they apply. Senate committee hearings held at the U.S. Capitol over the past several years have addressed, among other things, the use of algorithms on platforms such as Facebook that promote certain content, including not only socially positive but also socially negative content. Such algorithms are then changed so that the scale of social negativity is reduced. Recently, however, there have been increasing numbers of similar, socially negative cases of algorithms promoting specific political content, e.g. promoting content typical of right-wing political options while limiting the reach of content typical of left-wing ones. These are thus situations of intentional discrimination against the part of the community holding political views that the owners of certain social media companies have deemed contrary to the information policy applied in their media and/or to the specific ideology promoted there. This type of activity is difficult to reconcile with freedom of speech, the unrestricted development of the information society, and democracy.
Recently, companies running social media sites have been improving these media by implementing new Industry 4.0/5.0 technologies, including Big Data Analytics and generative artificial intelligence. These technologies can also be used to technically improve the algorithms that control and promote selected content posted and shared by Internet users, which is an important part of shaping information policy in these media.
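A toy sketch of the mechanism described above: a feed-ranking function whose weights determine which posts surface, so that changing the policy changes what users see. The posts and their scores are invented.

```python
# Toy illustration: a feed-ranking policy decides which posts surface, and
# changing one weight changes the ordering. All scores are invented.
posts = [
    {"id": 1, "engagement": 0.9, "credibility": 0.2},
    {"id": 2, "engagement": 0.4, "credibility": 0.9},
    {"id": 3, "engagement": 0.6, "credibility": 0.6},
]

def rank(posts, w_engagement, w_credibility):
    # Higher combined score -> shown earlier in the feed.
    return sorted(posts,
                  key=lambda p: (w_engagement * p["engagement"]
                                 + w_credibility * p["credibility"]),
                  reverse=True)

print([p["id"] for p in rank(posts, 1.0, 0.0)])  # engagement only -> [1, 3, 2]
print([p["id"] for p in rank(posts, 0.5, 0.5)])  # credibility-aware -> [2, 3, 1]
```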
I have described the issues of the role of information, information security, including business information transferred through social media, and the application of Industry 4.0/5.0 technologies to improve data and information transfer and processing systems in social media in the following articles:
The postpandemic reality and the security of information technologies ICT, Big Data, Industry 4.0, social media portals and the Internet
The Importance and Organization of Business Information Offered to Business Entities in Poland via the Global Internet Network
THE QUESTION OF THE SECURITY OF FACILITATING, COLLECTING AND PROCESSING INFORMATION IN DATA BASES OF SOCIAL NETWORKING
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Do the companies running social media portals consciously shape the general social consciousness of citizens, Internet users through the specific information policies applied?
Do companies running social media portals shape the general social consciousness of citizens through the specific information policies applied?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Yes, social media companies significantly shape social awareness through their information policies, dictating what content users see. Their algorithms and content moderation decisions influence public discourse and perceptions of various issues, highlighting their substantial impact on society's collective understanding.
  • asked a question related to Big Data
Question
2 answers
Does the application of Big Data Analytics and artificial intelligence technologies in the credit scoring processes of potential borrowers increase the profitability of commercial banks' lending activities?
Does the application of Big Data Analytics and artificial intelligence technologies in the processes of screening the creditworthiness of potential borrowers in order to improve, among other things, credit scoring analytics and credit risk management increase the profitability of commercial banks' lending activities?
In recent years, the scale of application of ICT and Industry 4.0/5.0 technologies, including Big Data Analytics and artificial intelligence, in financial institutions, including commercial banks, has been increasing. The banking sector is among those sectors of the economy where the implementation of new information technologies used to build banking information systems is progressing rapidly. In highly developed countries this process has been under way since the 1960s. Subsequently, the development of computer science and personal computer technology in the 1970s and 1980s, of the Internet and its business applications since the 1990s, and then of technologies typical of Industry 4.0/5.0 set the trends of technological progress whose effects, in the form of new technological solutions, quickly found applications in financial institutions.
Commercial banks operating in the classic deposit-and-credit banking model usually generate the largest part of their revenues from the sale of bank loans and credits. Large universal banks also develop selected elements of investment banking, in which they finance the construction of housing estates through their own development companies and carry out financial transactions in securities, foreign exchange and other capital markets. In all these areas of activity, the key categories of banking risk that banks manage include credit risk, interest rate risk and other financial risks, i.e. liquidity risk and debt risk. In addition, banks manage asset-liability mismatch risk in the balance sheet and various categories of operational risk related to the performance of certain activities at the bank, including personnel operational risk related to the staff employed, technical operational risk related to the technical equipment used, and system operational risk related to the IT systems used. On the other hand, risks operating in the bank's environment and affecting its operations, and indirectly its financial performance, include market risk of changes in the prices of specific assets in the markets in which banks operate; foreign exchange risk associated with transactions made in different currencies; investment risk within investment banking; systemic risk associated with the functioning of the financial system; political risk associated with the government's economic policy; and the risk of high volatility in the macroeconomic development of the economy associated with multi-year business cycles.
However, where lending is the main source of a commercial bank's income, credit risk is a particularly important category of banking risk. Moreover, with the rapid development of electronic, Internet and mobile banking, cyber risk management is also growing in importance. New ICT and Industry 4.0/5.0 technologies, including Big Data Analytics and artificial intelligence, can be increasingly helpful in managing each of these risk categories, and they prove particularly helpful when effectively implemented into banking activities to improve, among other things, credit risk management.
An important element of individual credit risk management, i.e. the management of individual credit transactions, are the methodologies, procedures and processes used in commercial banks to analyze a potential borrower's creditworthiness and the credit risk arising from a bank loan. In view of the above, implementing new technologies to support the processes of examining the creditworthiness of potential borrowers and improving, among other things, credit scoring analytics is a particularly important aspect of credit risk management, which may translate into increased profitability of commercial banks' lending activities.
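For illustration, here is a minimal credit-scoring sketch in Python using logistic regression on synthetic borrower data; the features, the assumed default relationship and all figures are invented, and real bank scorecards are far more elaborate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Entirely synthetic borrower data: income, debt-to-income ratio, years
# employed, number of late payments. The "true" default relationship below
# is an assumption made purely to generate labels for the example.
rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.normal(60, 15, n),     # income (thousands)
    rng.uniform(0, 0.6, n),    # debt-to-income ratio
    rng.integers(0, 30, n),    # years employed
    rng.poisson(0.5, n),       # late payments on record
])
logit = -2.0 + 4.0 * X[:, 1] + 0.8 * X[:, 3] - 0.02 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # True = default

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]           # probability of default
print(f"Test AUC: {roc_auc_score(y_te, scores):.3f}")
```

The AUC on held-out data is the kind of metric by which a bank would judge whether a richer, Big-Data-driven model actually improves scoring over a simpler one.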
I described selected issues of improving credit risk management processes, including the issue of screening the creditworthiness of potential borrowers and credit scoring analytics, in an article of my co-authorship:
Determinants of credit risk management in the context of the development of the derivatives market and the cyclical conjuncture economic processes
IMPROVING MANAGING THE CREDIT RISK IN CONDITIONS SLOWING ECONOMIC GROWTH
THE IMPLEMENTATION OF AN INTEGRATED CREDIT RISK MANAGEMENT IN OPERATING IN POLAND COMMERCIAL BANKS
Importance and implementation of improvement process of prudential instruments in commercial banks on the background of anti-crisis socio-economic policy in Poland
GLOBALIZATIONAL AND NORMATIVE DETERMINANTS OF THE IMPROVEMENT OF THE BANKING CREDIT RISK MANAGEMENT IN POLAND
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Does the application of Big Data Analytics and artificial intelligence technologies in the processes of screening the creditworthiness of potential borrowers in order to improve, among other things, credit scoring analytics and credit risk management, result in an increase in the profitability of commercial banks' bank lending activities?
Does the application of Big Data Analytics and artificial intelligence technologies in the credit scoring processes of potential borrowers result in increased profitability of commercial banks' lending business?
Can Big Data Analytics and artificial intelligence help improve credit scoring and increase the profitability of commercial banks' lending activities?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Certainly, artificial intelligence has the potential to enhance credit ratings for commercial banks, as generative AI offers a transformative approach to managing credit risk. This involves gathering and scrutinizing customer data, such as inter-bank transactions, to revolutionize banking practices.
  • asked a question related to Big Data
Question
4 answers
In my opinion, I could say:
Benefits:
  1. Accelerated Drug Discovery
  2. Cost Reduction
  3. Optimized Clinical Trials
Challenges:
  1. Dealing with big data
  2. Over-fitting and Generalization (see the sketch after this list)
  3. Human Expertise and Collaboration
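Regarding challenge 2, here is a minimal sketch of how over-fitting can be detected by comparing training fit with cross-validated performance, on synthetic data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Synthetic data: an unconstrained tree memorizes noise (high train R^2,
# lower cross-validated R^2), while a depth-limited tree generalizes better.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 300)

for depth in (None, 3):
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0)
    train_r2 = tree.fit(X, y).score(X, y)
    cv_r2 = cross_val_score(tree, X, y, cv=5).mean()
    print(f"max_depth={depth}: train R^2={train_r2:.2f}, CV R^2={cv_r2:.2f}")
```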
Relevant answer
Answer
Utilizing artificial intelligence (AI) and machine learning (ML) in drug discovery and development presents numerous potential benefits, such as accelerated drug discovery, target identification, drug repurposing, optimized clinical trials and personalized medicine, as well as challenges, such as data quality and quantity, interpretability, regulatory hurdles, and validation and reproducibility.
Regards
Jogeswar Tripathy
  • asked a question related to Big Data
Question
1 answer
How to develop sustainable modern urban agglomerations according to the green smart city model with the application of Industry 4.0 technologies, including Big Data and artificial intelligence technologies, and in accordance with the concept of sustainable economic development, the principles of green economic transformation, circular economy, reduction of greenhouse gas emissions, development of urban parks and other green areas, self-sufficiency in energy and food production, environmental and climate social responsibility, etc.?
An important attribute of a modern agglomeration developed in accordance with the green smart city model is also low greenhouse gas emissions, minimized energy and resource consumption, and a minimized negative impact of the agglomeration on the environment, the surrounding biosphere and the climate. Accordingly, agglomerations developed according to the green smart city model should become increasingly low-emission, energy-efficient and more neutral to the planet's biosphere and climate.
Urban development under the green smart city concept should also take into account the sustainable economic development of the urban agglomeration, embracing the principles of green economic transformation and the circular economy, reducing greenhouse gas emissions and maximizing self-sufficiency in energy and food production. In achieving energy self-sufficiency, the development of renewable and emission-free energy sources can be helpful. Greenhouse gas emissions can be reduced not only by developing renewable and zero-emission energy sources, but also by insulating the facades of existing buildings to reduce energy consumption, and by developing public urban transportation based on electromobility and hydrogen-powered transport. Self-sufficiency in food production, in turn, can be helped by vertical vegetable and fruit farms created in abandoned office buildings and factory halls, as well as in new multi-story buildings built specifically for this purpose.
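A back-of-envelope sketch of the food self-sufficiency point above: comparing assumed vertical-farm output with a city's assumed vegetable demand. All figures are illustrative placeholders.

```python
# Back-of-envelope check of urban food self-sufficiency from vertical
# farming; every figure below is an illustrative assumption.
population = 500_000
veg_demand_kg = population * 110        # assumed kg of vegetables per year

floor_area_m2 = 40_000                  # repurposed halls and office floors
stacked_layers = 8                      # growing layers stacked per floor area
yield_kg_per_m2 = 60                    # assumed annual yield per growing m^2

supply_kg = floor_area_m2 * stacked_layers * yield_kg_per_m2
print(f"Vertical farms would cover {supply_kg / veg_demand_kg:.0%} "
      f"of the city's assumed vegetable demand")
```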
In addition, an important issue in achieving zero-carbon cities is accounting for the carbon footprint not only of the operating housing blocks, office buildings, market halls, shopping centers, factory halls, public administration and public service buildings, etc., but also of the greenhouse gases emitted during their construction. As a result, issues will probably arise more than once regarding the choice of the lower-emission option: whether to revitalize existing buildings, insulate their exterior facades, install photovoltaic panels, etc., or rather to demolish them and build new ones in their place using more modern construction, energy and material technologies that ensure greater durability, lower emissions, and better energy efficiency or even zero-energy performance.
To reduce the negative impact of an urban agglomeration on the biosphere, it is essential to build wastewater treatment plants on rivers leaving the cities and to establish water purification systems for ponds, lakes and other water bodies located in the city or its suburbs. To reduce emissions, improve air quality, increase humidity and lower air temperature, it is essential to create additional urban parks and other green areas. Where the space that can be allocated to additional green areas, urban parks, flower meadows, etc. is limited, an optional solution is to create many so-called pocket parks, i.e. parks and other green areas occupying small plots, created by removing parts of overly extensive concrete sidewalks, asphalt roadways, squares paved with concrete blocks, etc.
In view of the above, sustainable, modern urban agglomerations developed according to the green smart city model with the application of Industry 4.0 technologies, including Big Data and artificial intelligence technologies, should also be developed in accordance with the concept of sustainable economic development, the principles of green economy transformation, circular economy, reduction of greenhouse gas emissions, development of urban parks and other green areas, self-sufficiency in energy and food production, environmental and climate social responsibility, etc.
I described key aspects of the negative effects of progressive global warming, and of the associated need to accelerate the green transformation of the economy in order to decarbonize it, slow global warming, and protect the climate, biosphere and biodiversity of the planet's natural ecosystems, in the article:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to develop sustainable, modern urban agglomerations according to the green smart city model with the application of Industry 4.0 technologies, including Big Data and artificial intelligence technologies, and in accordance with the concept of sustainable economic development, the principles of green economy transformation, circular economy, reduction of greenhouse gas emissions, development of urban parks and other green areas, self-sufficiency in energy and food production, environmental and climate social responsibility, etc.?
How to develop green smart city with the application of Industry 4.0 technologies and the concept of sustainable economic development?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Planning for building construction and construction waste, rather than focusing only on green building concepts, is the way to support sustainable development.
  • asked a question related to Big Data
Question
4 answers
Can artificial intelligence combined with Big Data Analytics help to reconstruct the genomes of species of flora and/or fauna recently driven extinct by human activity, and thus restore the former biodiversity of the planet's natural ecosystems?
Due to the negative impact of human civilization on the planet's biosphere and climate, the sixth great extinction of many species of flora and fauna is currently taking place on planet Earth. The result is a rapidly progressing decline in the biodiversity of the planet's natural ecosystems. Efforts are under way to halt, or at least significantly slow down, progressive climate change, i.e. the increasingly rapid global warming caused by rising greenhouse gas emissions. Many species of flora and fauna have already become extinct, and many are on the verge of extinction due to a sharp decline in their numbers. Some animal species are now found exclusively, or almost exclusively, in zoos. A few animal species have been reconstructed through deliberate cross-breeding of closely related species and varieties. To increase the chances of reviving extinct species in the future, gene banks are being created to store the genomes of specific species of flora and fauna, above all species threatened with extinction in the near future.
Over the past few decades, insect populations, including pollinators, have declined by tens of percent in many parts of the world. If these processes continue, then in addition to the rapidly advancing decline in biodiversity and the impoverishment of the planet's biosphere, agricultural output under the intensive, unsustainable model of human farming will also decline rapidly. In tropical countries, natural rainforests continue to be cut down merely to create new arable land for growing crops to feed livestock that generate high methane emissions; the arable land created in this way quickly erodes, which also hinders the restoration of forest ecosystems in these areas. Coral reefs in the seas and oceans are also dying, and populations of marine fish and other animal species living in the seas and oceans are rapidly declining.
Key factors in the still large-scale sixth great extinction of species of flora and fauna include continued deforestation and the cutting down of natural forest ecosystems to increase cultivated areas, the degradation of natural ecosystems caused by industrial development, persistently high levels of environmental pollution, and continually rising greenhouse gas emissions driving the progressive global warming process. The unsustainable intensive economy is thus causing a rapidly progressing decline in the biodiversity of the planet's natural ecosystems, and attempts are being made to counteract this. In order to slow global warming, reduce greenhouse gas emissions, reduce environmental pollution and the devastation of natural ecosystems, reduce the scale of extinction and slow the decline in biodiversity, it is necessary to urgently accelerate the pro-environmental transformation of the economy so as to efficiently build a sustainable, green, zero-carbon, zero-growth, closed-cycle economy.
In the process of building a sustainable, green, zero-carbon, zero-growth, closed-loop economy, new ICT and Industry 4.0/5.0 technologies can help, including generative artificial intelligence and the multi-criteria processing of large data sets in ongoing research and analytical processes using Big Data Analytics. New technologies can increase the efficiency of, and accelerate, the building of such an economy and thus help protect the climate, biosphere and biodiversity of the planet's natural ecosystems. The application of new Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics, can also increase the efficiency of research aimed at reconstructing the genomes of extinct species of flora and/or fauna. In this regard, perhaps artificial intelligence combined with Big Data Analytics can help reconstruct the genomes of species of flora and/or fauna that have recently been driven extinct by humans and thus restore the former biodiversity of the planet's natural ecosystems.
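As a deliberately simplified illustration of one computational idea behind genome reconstruction, the sketch below imputes unreadable positions in a degraded sequence by majority vote over aligned sequences of close relatives; real ancient-DNA and genome-assembly pipelines are vastly more complex, and the sequences here are toy data.

```python
from collections import Counter

# Toy data: '?' marks unreadable positions in a degraded sequence; the
# relatives are assumed to be already aligned to the same coordinates.
degraded = "ATG?CGT?ACGT??GCA"
relatives = [
    "ATGACGTTACGTAAGCA",
    "ATGACGTAACGTAAGCA",
    "ATGCCGTTACGTATGCA",
]

reconstructed = []
for i, base in enumerate(degraded):
    if base != "?":
        reconstructed.append(base)
    else:
        # Impute the missing base by majority vote at this position.
        vote = Counter(seq[i] for seq in relatives).most_common(1)[0][0]
        reconstructed.append(vote)

print("".join(reconstructed))   # -> ATGACGTTACGTAAGCA
```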
I am conducting research on the key determinants of the legitimacy of urgently carrying out a green transformation of the economy in order to build a sustainable, green, circular economy. I have included the conclusions of my research in the following article:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can artificial intelligence coupled with Big Data Analytics help in reconstructing the genomes of various species of flora and/or fauna that have recently become extinct due to humans, and thus restore the former biodiversity of the planet's natural ecosystems?
Can artificial intelligence coupled with Big Data Analytics help restore the genomes of extinct species of flora and/or fauna?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Mr. Kh. Boymurodov says: "I work with distribution, species composition and molecular analysis of hydrobionts in aquatic ecosystems. I propose to develop a grand project on the distribution of hydrobionts in aquatic ecosystems. Studying the effects of climate change on hydrobionts is an important challenge."
  • asked a question related to Big Data
Question
4 answers
How can artificial intelligence and Big Data technologies be used to help develop critical thinking in young people and to reduce the disinformation that targets children and young people through online social media?
Disinformation is currently the most frequently cited problem in the social media from which children and young people gain knowledge. Companies engage advertising agencies that specialize in running online advertising campaigns, in which advertising spots, videos and banners informing people about promotional offers for products and services are posted on social media. These online social media are also viewed by children and teenagers; for some of them, the primary audiences for profiled information and marketing messages are mainly school-aged youth. Children and adolescents are particularly susceptible to the influence of information transferred through these media. Advertisements are thematically profiled to correlate with the main interests of children and adolescents. Unfortunately, many offers of products and services promoted through online advertising campaigns are not suitable for children and adolescents and/or generate many negative effects.
Nowadays, applications based on generative artificial intelligence technology, such as intelligent chatbots, are increasingly used to generate banners, graphics, photos, videos, animations and advertising spots. With these tools, available on the Internet, it is possible to create a photo, graphic or video from a written command, i.e. digitally generated works of such high graphic quality that it is very difficult to determine whether they are authentic photos taken with a camera or smartphone or were generated by an intelligent chatbot. It is especially difficult for children and young people viewing such AI-generated "works" in banners or advertising videos to resolve this question.
It is therefore necessary for education to develop in children the ability to think critically, to ask questions, to question the veracity of advertising content and not to accept uncritically everything found in online social media. Critical thinking must be added to the process of educating children and young people, with the goal, among other things, of developing the ability to identify disinformation, including the increasingly common factoids, deepfakes, etc. in online social media. Since applications based on artificial intelligence are involved in creating the disinformation found mainly in these media, children and adolescents should, as part of their education, learn about the Internet applications based on generative artificial intelligence through which texts, graphics, photos, drawings, animations and videos can be generated semi-automatically from a verbal command. In this way, applications based on the new Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data, can be used to help develop critical thinking and a kind of resistance to misinformation in young people. During school lessons, students should learn about the capabilities of AI-based applications available on the Internet and use them creatively to develop critical thinking skills.
In this way, it is possible to reduce disinformation directed through online social media towards children and young people.
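One common technical building block for this is a supervised text classifier. The sketch below trains a TF-IDF plus logistic regression pipeline on a handful of hand-written examples that stand in for a real labeled corpus; such a toy model is, of course, nowhere near production quality.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-written examples standing in for a real labeled corpus.
texts = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "Statistics office releases quarterly employment data",
    "Miracle cure doctors don't want you to know about",
    "Secret plot: celebrities replaced by clones, insiders reveal",
]
labels = [0, 0, 1, 1]   # 0 = credible, 1 = disinformation

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Shocking secret cure revealed by anonymous insiders"]))
```

In a classroom setting, letting students inspect which words push a post toward the "disinformation" label is itself an exercise in critical thinking about how such systems work.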
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to use artificial intelligence and Big Data technologies to help develop critical thinking in young people and the goal of reducing misinformation that targets children and young people through online social media?
How can artificial intelligence technology be used to help educate youth in critical thinking and the ability to identify disinformation?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I will be teaching Critical Thinking as a stand-alone full semester course in a BSc in the Fall. I've incorporated AI by requiring learners to evaluate the AI information comparatively to information that is available in other formats (most prominently research and scientific consensus information) for specific topics, including the topic of CT, which is based primarily in self-evaluation of own thinking processes. AI cannot, as far as I know, mimic CT because it requires personal insight. For now this makes AI another thought process challenge for the critical-thinking learner. They are not expected to rely on any information provided to them by AI, as they are not expected to rely on any information provided to them in other formats, until they've evaluated both the information and their thought processes in relation to that information and come to a logical and justifiable position.
  • asked a question related to Big Data
Question
1 answer
How to curb the growing scale of disinformation, including factoids and deepfakes generated in social media, through the use of generative artificial intelligence technology?
In order to reduce the growing scale of disinformation, including disinformation generated in social media through the increasing volume of fake news, deepfakes and content produced with Internet applications based on generative artificial intelligence, the very same GAI technology can be used. Constantly improved and taught to carry out new types of activities, tasks and commands, intelligent chatbots and other applications based on generative artificial intelligence can be applied to identify instances of disinformation spread primarily in online social media. Such disinformation is particularly dangerous for children and adolescents; it can significantly affect the general public's awareness of certain issues, shape the development trends of certain social processes, affect the results of parliamentary and presidential elections, and also affect the level of sales of certain products and services.
In the absence of a developed institutional system of media oversight, including of the new online media; of a system for checking the objectivity of content directed at citizens in advertising campaigns; of attention to disinformation by competition and consumer protection institutions; of well-functioning institutions protecting democracy; and of institutions reliably upholding high standards of journalistic ethics and media independence, the scale of disinformation of citizens by various groups of influence, including public institutions and commercially operating business entities, may be high and may generate high social costs. Accordingly, new Industry 4.0/5.0 technologies, including generative artificial intelligence (GAI), should be enlisted to reduce the growing scale of disinformation, including factoids, deepfakes, etc. generated in social media. GAI technologies can help identify fake-news pseudo-journalistic content, photos containing deepfakes, and factually incorrect content in banners, spots and advertising videos published in various media as part of advertising and promotional campaigns aimed at boosting sales of various products and services.
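One practical building block for the identification task described above is perceptual hashing, which lets a platform match newly posted images against a registry of already-identified manipulated images even after re-compression or resizing. The sketch uses the third-party Pillow and ImageHash packages, and the file paths are placeholders.

```python
from PIL import Image          # Pillow
import imagehash               # third-party ImageHash package

# Registry of perceptual hashes of already-identified manipulated images;
# the file paths are placeholders.
known_fake_hashes = [
    imagehash.phash(Image.open(path))
    for path in ["known_fake_1.jpg", "known_fake_2.jpg"]
]

candidate = imagehash.phash(Image.open("incoming_post.jpg"))

# A small Hamming distance means visually near-identical content, even
# after re-compression, resizing or minor edits.
if any(candidate - h <= 8 for h in known_fake_hashes):
    print("Flag for review: matches a known manipulated image")
```

Hash matching only catches recycled fakes; detecting novel AI-generated content remains a much harder, open problem, as the answer below also notes.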
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in an article of my co-authorship:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to curb the growing scale of disinformation, including factoids and deepfakes generated in social media, through the use of generative artificial intelligence technology?
How to curb disinformation generated in social media using artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Dear Prof. Prokopowicz!
You spotted a real problem to fight with. I found a case study "Elections in 2024" that illustrates blind spots...:
a) CHARLOTTE HU (2024). How AI Bots Could Sabotage 2024 Elections around the World: AI-generated disinformation will target voters on a near-daily basis in more than 50 countries, according to a new analysis, Scientific American 24 February 2024, Quoting: "Currently, AI-generated images or videos are easier to detect than text; with images and videos, Du explains, “you have to get every pixel perfect, so most of these tools are actually very inaccurate in terms of lighting or other effects on images.” Text, however, is the ultimate challenge. “We don’t have tools with any meaningful success rate that can identify LLM-generated texts,” Sanderson says." Available at:
b) Heidi Ledford (2024). Deepfakes, trolls and cybertroopers: how social media could sway elections in 2024: Faced with data restrictions and harassment, researchers are mapping out fresh approaches to studying social media’s political reach. News, Nature 626, 463-464 (2024) Quoting: "Creative workarounds: ...behind the scenes, researchers are exploring different ways of working, says Starbird, such as developing methods to analyse videos shared online and to work around difficulties in accessing data. “We have to learn how to get insights from more limited sets of data,” she says... Some researchers are using qualitative methods such as conducting targeted interviews to study the effects of social media on political behaviour, says Kreiss. Others are asking social media users to voluntarily donate their data, sometimes using browser extensions. Tucker has conducted experiments in which he pays volunteers a small fee to agree to stop using a particular social media platform for a period, then uses surveys to determine how that affected their exposure to misinformation and the ability to tell truth from fiction."
Yours sincerely, Bulcsu Szekely
  • asked a question related to Big Data
Question
2 answers
Does the application of generative artificial intelligence technology and Big Data Analytics enable the improvement of computerized Business Intelligence business management support systems?
The growing volume of data processed in companies and enterprises makes it necessary to involve specialized software and information systems so that data analysis is carried out effectively and its results provide knowledge that supports the management of the business entity. Acquiring large amounts of data from various sources and storing and processing them is the domain of Big Data Analytics technology. However, to significantly increase the efficiency of processing large data sets and use this type of analytics to support the management of the business entity, computerized, multi-module Business Intelligence applications are particularly helpful. Combining Big Data Analytics database technologies and analytical platforms with Business Intelligence applications makes it possible, on the basis of large sets of not fully structured and organized data, to generate information useful to a specific entity, as well as concrete, distilled substantive knowledge used to support the management of an organization, institution, business entity, etc.
The key objectives of applying knowledge generated in this way include improving the quality of business decisions, reducing the risk of errors in organizational management, improving risk management systems, and increasing the effectiveness of early warning systems for new threats and development opportunities. Analytics conducted on large data sets with Big Data Analytics and Business Intelligence applications can help in carrying out restructuring and in developing a new strategy, investment project, marketing plan or business remodel. Business Intelligence analytics can support the management of various spheres of business activity of companies and enterprises and thus the operation of various departments, including procurement, production, distribution, sales, marketing communication with customers, and relations with business contractors and financial or public institutions.
Multi-module Business Intelligence information systems can operate as integrated information systems or be a key element of such systems, digitally integrating many different aspects of companies, enterprises or other entities. They can also be dedicated to handling and supporting specific business processes at different levels of the organizational structure, i.e. they can consist of modules serving operational employees, departmental managers and executives, as well as the board of directors and the company president. Moreover, with the development of deep learning carried out using artificial neural networks and of generative artificial intelligence technologies, there are opportunities to further automate analytical processes using these technologies.
The application of artificial intelligence technologies to analytics carried out using Big Data Analytics and artificial intelligence can significantly increase the efficiency of analytical processes and in terms of supporting organizational management processes, can speed up decision-making processes and reduce the risk of errors. A particularly important attribute of such solutions is the ability to perform predictive analysis and forecasting, so that an entrepreneur can spot certain business and economic patterns in good time and forecast future financial performance and development trends more accurately. Thanks to the use of generative artificial intelligence technology, the functionality and usefulness of analytics based on Big Data Analytics and Business Intelligence class systems is significantly increasing.
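To make the predictive-analytics point concrete, here is a minimal sketch in Python of the kind of trend extrapolation such systems automate. The revenue figures are invented for the example; in a real Business Intelligence system the series would come from the company's data warehouse.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
# Hypothetical monthly revenue series (invented figures).
revenue = pd.Series([100, 104, 110, 108, 115, 121, 127, 125, 133, 140, 146, 151])
# Use the time index as the only feature and fit a linear trend.
X = np.arange(len(revenue)).reshape(-1, 1)
model = LinearRegression().fit(X, revenue.values)
# Forecast the next three months by extrapolating the fitted trend.
future = np.arange(len(revenue), len(revenue) + 3).reshape(-1, 1)
print(model.predict(future).round(1))
A real system would of course use richer features and models; the point is only that the forecasting step itself is straightforward to automate.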
In view of the rapid development of generative artificial intelligence technology and its implementation in applications and information systems supporting business management, I address the above question to the ResearchGate community of researchers, scientists and friends.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Does the application of generative artificial intelligence and Big Data Analytics technologies enable the improvement of computerized Business Intelligence support systems for enterprise management processes?
Does the application of artificial intelligence and Big Data Analytics enable the improvement of computerized Business Intelligence systems?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Yes, artificial intelligence can help businesses build smarter Business Intelligence solutions: its algorithms can analyze data and suggest more useful approaches to business decision-making.
  • asked a question related to Big Data
Question
3 answers
How can artificial intelligence technology combined with Big Data Analytics help in the development of sustainable organic farming as an important part of the green transformation of the agricultural sector?
How can artificial intelligence technology combined with Big Data Analytics help in the design, planning and development of a crop farm within the framework of the sustainable organic farming formula as an essential element of the green transformation of the agricultural sector, the green transformation of the economy and the protection of the climate, biosphere and biodiversity of the planet's natural ecosystems?
Generative artificial intelligence technology is used in various fields of economic activity, in business entities operating in different sectors of the economy and in different spheres of human activity. Intelligent chatbots available on the Internet are used, among other things, for the rapid development of plans and projects for various activities and ventures. Since ChatGPT and similar intelligent chatbots were trained, using artificial neural networks and deep learning, on large data sets downloaded from selected websites, libraries and indexing databases of scientific publications, articles and books, they take into account a large body of knowledge from various scientific disciplines, including various aspects of agriculture and organic farming. In this regard, artificial intelligence combined with Big Data Analytics can help in the design, planning and development of a crop farm within the formula of sustainable organic farming, an important component of the green transformation of the agricultural sector and the economy and of the protection of the climate, biosphere and biodiversity of the planet's natural ecosystems. Planning the green transformation of agriculture involves, first of all, increasing efficiency while returning to traditional agricultural techniques developed centuries ago: farming without pesticides, herbicides and other chemical agents; small cultivated fields separated by green belts of flower meadows, shrubs, trees and forests; various vegetables and fruits grown side by side; crops with high levels of biodiversity; incompletely regulated rivers with floodplains and undrained marshland; additional ponds and reservoirs to collect rainwater; home gardens and orchards with apiaries of honey bees; compost piles for crop waste, and so on. In addition, sustainable organic farming aims to decarbonize agriculture, including emissions of methane, which is particularly dangerous to the climate. Reducing emissions will slow the planet's greenhouse effect and the progressive process of global warming. The green transformation of agriculture will also reduce environmental pollution and deforestation, improve the microclimate, slow soil aridification and reduce the scale of increasingly frequent droughts. Implemented in accordance with the goals of sustainable development and climate and environmental social responsibility, the principles of sustainable organic farming will thus significantly increase the sustainability of agriculture in relation to the climate and the environment.
Since planning farm organization in accordance with this formula requires taking into account a great deal of data and information on constantly changing climatic, environmental and soil factors, surrounding natural ecosystems, crop production techniques, crop varieties, and so on, the application of multi-criteria analytics on Big Data Analytics platforms, combined with generative artificial intelligence for the intelligent processing of large data sets, is likely to be helpful in this regard.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I am conducting research on this issue. I have included the conclusions of my research in my article below:
IMPLEMENTATION OF THE PRINCIPLES OF SUSTAINABLE ECONOMY DEVELOPMENT AS A KEY ELEMENT OF THE PRO-ECOLOGICAL TRANSFORMATION OF THE ECONOMY TOWARDS GREEN ECONOMY AND CIRCULAR ECONOMY
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence technology combined with Big Data Analytics help in the design, planning and development of a crop farm within the framework of the sustainable organic farming formula as an important element of the green transformation of the agricultural sector, the green transformation of the economy and the protection of the climate, biosphere and biodiversity of the planet's natural ecosystems?
How can artificial intelligence technology and Big Data Analytics help in the development of sustainable organic agriculture?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Artificial intelligence (AI) and Big Data Analytics can significantly contribute to the development of sustainable organic agriculture in several ways:
1. Precision farming: By using sensors, drones, and other tools to collect data on factors like soil health, weather patterns, and crop growth, AI can analyze this "big data" and recommend precise actions (a short code sketch follows this list). This allows for:
- Optimized irrigation: AI can determine the exact amount of water needed for each specific area of the field, avoiding overwatering and water waste.
- Targeted application of resources: AI can identify areas where nutrients or pest control measures are necessary, minimizing unnecessary use of organic fertilizers and pesticides.
- Improved crop health monitoring: AI algorithms can detect early signs of disease or nutrient deficiencies in crops, enabling early intervention to minimize damage and waste.
2. Enhanced decision-making: By analyzing historical data and weather patterns, AI can predict future trends and suggest optimal planting times, crop rotations, and resource allocation strategies. This can help farmers make informed decisions that improve yields and resource efficiency while adhering to organic principles.
3. Improved pest and disease management: AI-powered image recognition can automatically detect pests and diseases in crops, allowing for targeted organic control methods instead of broad-spectrum applications. Additionally, AI can analyze historical data to predict pest outbreaks and suggest preventive measures.
4. Optimization of organic fertilizer and pesticide development: AI can analyze vast datasets to identify potential organic compounds with pest control or nutrient-enhancing properties. This can accelerate the development of new and effective organic solutions.
5. Promoting soil health: AI can analyze soil data and recommend practices that improve soil health, a crucial aspect of sustainable organic agriculture. This can involve suggesting cover cropping strategies, composting techniques, and other organic methods to maintain soil fertility and biodiversity.
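To make point 1 concrete, here is a minimal sketch using scikit-learn; the sensor features, the synthetic data-generating rule and all figures are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
# Hypothetical sensor records: soil moisture (%), air temperature (C),
# rainfall over the last 7 days (mm); target: irrigation water (litres/m2).
rng = np.random.default_rng(0)
X = rng.uniform([10, 5, 0], [45, 35, 60], size=(500, 3))
y = np.clip(30 - 0.5 * X[:, 0] + 0.4 * X[:, 1] - 0.2 * X[:, 2], 0, None)  # invented rule
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
# Recommend irrigation for one new field-zone reading.
print(model.predict([[22.0, 28.5, 5.0]]))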
  • asked a question related to Big Data
Question
3 answers
How can the risk of leakage be reduced for sensitive data of companies, enterprises and institutions that employees of these entities enter into ChatGPT?
How can the risk of leakage be reduced for sensitive data of companies, enterprises and institutions that employees of these entities enter into ChatGPT or other intelligent chatbots equipped with generative artificial intelligence technology in an attempt to make their work easier?
Despite training and updated internal rules in many companies and enterprises regarding the proper use of intelligent chatbots, such as the ChatGPT made available online by OpenAI and similar applications that more and more technology companies are releasing on the Internet, there are still situations where reckless employees enter sensitive data of their employers into these online tools. In such situations there is a high risk that data and information entered into ChatGPT, Copilot or another such chatbot may subsequently appear in a reply, report, essay or article generated by the application on the smartphone, laptop or computer of another user. In this way, another Internet user may, accidentally or through a deliberate search for specific data, come into possession of key, sensitive data of a business entity, public institution or financial institution, concerning, for example, confidential strategic plans, i.e. information of great value to competitors or to the intelligence organizations of other countries. Such situations have already occurred in companies with highly recognizable brands in specific markets for products or services. They clearly indicate the need to improve internal procedures for data and information protection, the efficiency of data protection systems, early warning systems signalling a growing risk of losing key company data, and systems for managing the risk of leakage of sensitive data and of cybercriminal attacks on internal company information systems. In parallel with improving these systems, internal regulations on employees' use of chatbots available on the Internet should be updated on an ongoing basis, according to the scale of the risk and the development and implementation of new technologies in the business entity. Training should also be conducted so that employees learn about both the new opportunities and the risks arising from applications and tools based on generative artificial intelligence made available on the Internet. Another solution may be a complete ban on employees using smart chatbots available on the Internet. In that case, the company will be forced to create its own equivalent applications and intelligent chatbots, operating solely as internal modules of the company's information systems and not connected to the Internet. Such a solution will probably involve significant financial expenditure on building these IT solutions; the costs can be considerable, and for many small companies the financial barrier may be prohibitive. On the other hand, if internal IT systems equipped with their own intelligent chatbots become an important element of competitive advantage over key direct competitors, these expenditures will probably be regarded as investment in development projects important for the company's future.
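As an illustration of the preventive measures discussed above, here is a minimal sketch of a prompt-screening filter that could sit in front of an external chatbot API. The patterns are purely illustrative; a production data-loss-prevention system would use the organization's own classifiers and dictionaries.
import re
# Illustrative patterns only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}
def screen_prompt(prompt: str) -> list:
    # Return the names of all sensitive-pattern classes found in the prompt.
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
prompt = "Summarize this: contact jan.kowalski@example.com, token sk-abcdefghij1234567890"
violations = screen_prompt(prompt)
if violations:
    print("Blocked before sending to the external chatbot:", violations)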
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the risk of leakage be reduced for sensitive data of companies, enterprises and institutions that employees of these entities enter into ChatGPT or other intelligent chatbots equipped with generative artificial intelligence technology in an attempt to make their work easier?
How can the risk of leakage of sensitive data that employees enter into ChatGPT be reduced?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
What serious things can an LLM like ChatGPT be used for in work? From what I see, it is most often used by people who lack natural intelligence and use this tool to create shallow content. Especially in marketing and advertising. I'm probably not seeing everything, but the responses generated by ChatGPT themselves are so buggy and unreliable that I can't imagine anyone responsible using them for any serious purpose. Yes - the best solution seems to me to avoid using such tools in serious applications where the tool could have access to sensitive data, and to use it only as a successor to the "answering machine", or as a replacement for copywriters.
  • asked a question related to Big Data
Question
3 answers
How can artificial intelligence technology and Big Data Analytics help design and model the development of renewable and carbon-free energy sources?
How can artificial intelligence technology combined with Big Data Analytics help design and model the development of renewable and carbon-free energy sources as an important part of the mix of different energy sources in the context of increasing the efficiency and security of energy production?
The development of renewable and carbon-free energy sources is a key element of the green transformation of the economy. It is also an important issue in the context of diversifying energy sources, increasing a country's energy independence and enhancing energy security. Increasing the share of renewable and emission-free sources in the energy mix not only realizes the key objectives of the green transformation and of sustainable economic development, but can also significantly reduce the cost of energy production and energy prices for end users in the coming years. How the structure of energy sources should be designed is the product of many determinants, including: the country's endowment with particular energy resources; the national energy policy and the extent to which it takes into account the green transformation of the economy and the sustainable development goals; terrain and other natural conditions conducive to the development of wind, hydro and solar power; the scale of geothermal water resources in the earth's crust; the financial feasibility of building large nuclear power plants; the construction technologies in use for residential and other buildings, and the prosumer formulas best suited to them, i.e. small power plants generating energy for individual buildings; and the technologies available for developing hydrogen power and other innovative types of energy. Since designing the most appropriate energy mix for a country requires taking many determinants into account in the research, planning, design and implementation of investment processes, a good solution is to involve generative artificial intelligence technology trained, through deep learning on large data sets and Big Data information, on various aspects of energy, including different types of energy, energy technologies and energy sources, and on the above-mentioned determinants of energy development.
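As a minimal illustration of multi-criteria energy-mix design, the sketch below formulates a toy linear program with SciPy; all costs, emission factors, capacities and the demand figure are invented for the example.
from scipy.optimize import linprog
# Decision variables: GWh generated from coal, gas, wind, solar, nuclear.
cost = [50, 60, 45, 40, 55]            # invented cost per GWh
emissions = [820, 490, 11, 45, 12]     # invented tCO2 per GWh
# Minimize total cost subject to meeting demand exactly,
# keeping total emissions under a cap, and per-source capacity limits.
A_ub = [emissions]
b_ub = [200_000]                       # emissions cap
A_eq = [[1, 1, 1, 1, 1]]
b_eq = [1000]                          # demand in GWh
bounds = [(0, 600), (0, 400), (0, 350), (0, 300), (0, 500)]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x.round(1))                  # optimal GWh per source
Real energy-mix models add many more criteria (reliability, intermittency, grid constraints), but the same optimization pattern scales to them.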
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the key issues of the problematic sources of Poland's exceptionally deep energy crisis of 2022 in my co-authored article below:
POLAND'S 2022 ENERGY CRISIS AS A RESULT OF THE WAR IN UKRAINE AND YEARS OF NEGLECT TO CARRY OUT A GREEN TRANSFORMATION OF THE ENERGY SECTOR
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence technology, combined with Big Data Analytics, help design and model the development of renewable and zero-carbon energy sources as an important part of the mix of different energy sources in the context of increasing the efficiency and safety of energy production?
How can artificial intelligence technology help in designing and modeling the development of renewable and carbon-free energy sources?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, AI is inadequate as a tool for determining the best solutions for carbon-free energy independence. First of all, AI will spit out what we feed in: it will provide solutions based on the data we provide. Therefore, if we provide wrong, incomplete and limited data, it will give us the best wrong, incomplete and limited solution. In my opinion, we will need smart kids, adults and seniors working together as a team, investing energy and time with long-term planning. I say kids because they care about the future and will be concerned about the trash their elders make. I say adults because they will be creating, building and providing the solution; they take the workload. And finally, I say seniors because they have experience from trials and errors, and sometimes common sense and the old ways are better solutions than some abstract theory or engineering. Over-engineering can also be harmful. To be more technical, we will need not just electrical engineers, physicists, civil engineers or mechanical engineers. We will also need systems engineers who can see the whole picture and put everything together so that each subsystem works smoothly. As a first step, I would define the problem and state it clearly, with a clear path and goal, and study history to find where a similar problem arose in the past. For example, we moved from wood to coal, oil, natural gas and uranium. All of these past solutions are correlated: they show increasing power density, increasing cost of infrastructure, increasing cost of fuel production, increasing cost of waste disposal, increasing cost of damage to ecosystems and increasing cost to life due to their toxicity. All of them are centralized. This is obviously a wrong path to continue on, so one must find an alternative path, and I would start from there. We should look at how other nations solved similar problems by following different paths. Take, for example, the energy situation in Poland, Canada, the U.S. and Japan. Canada, the U.S. and Japan do not need nuclear energy: they have plenty of water and real estate to cover their energy needs with hydroelectric dams alone. All these nations can use their coasts for additional power generation from wind turbines and wave energy, and on top of that they can use solar panels. I believe that with those solutions they could easily double their power output. But this does not happen because there is no political will and no sense of direction or purpose; their governments are weak, and the problem always falls on ordinary people and some corporations. Regarding Poland, it is more challenging, but not impossible, to live without nuclear power and coal plants. Engineers in Poland must team up and work on it the same way people rebuilt Warsaw after WWII. This is a problem that requires teamwork, long-term planning and respect for quality of life. Keep in mind that the energy crisis is not due to a lack of oil or natural gas. The problem is carbon emission, and reducing it alone will not solve the problem or make a better future: carbon emission is just a symptom. The real problem is waste production. We never spend mental energy or a dime on how to stop, eliminate or recycle waste; instead we dispose of it and hope that nature will do the rest. We want to grow indefinitely, therefore we need to consume more energy indefinitely, and in doing so we produce waste indefinitely, to the point that nature cannot keep up with our appetite and our waste production.
That is why we must change our path, and why AI will not tell us this unless we feed it proper and complete data, including historical data. My educated guess is that AI will predict that chips will become harder to produce, because they demand purity, while radioactive contamination will be so widespread and so bad that it will be harder to make the pure elements used in more delicate chips. This would also be an interesting research topic for others. AI will have to have holistic data, not just specific energy data.
  • asked a question related to Big Data
Question
3 answers
Can artificial intelligence help improve sentiment analysis of changes in Internet user awareness conducted using Big Data Analytics as relevant additional market research conducted on large amounts of data and information extracted from the pages of many online social media users?
In recent years, more and more companies and enterprises, before launching new product and service offerings, commission specialized marketing research firms to carry out, as part of their market research, sentiment analyses of changes in public sentiment, in awareness of the company's brand, and in recognition of its mission and offerings. This kind of sentiment analysis is carried out on computerized Big Data Analytics platforms, where a multi-criteria analytical process is run on a large set of data and information taken from multiple websites. The source websites are dominated by news portals publishing news and journalistic articles on a specific issue, including on the company, enterprise or institution commissioning the study. In addition, key sources of online data include online forums and social media, where Internet users discuss various topics, including the product and service offerings of companies, enterprises, and financial or public institutions. With the growing scale of e-commerce, including sales through online stores and shopping portals, and the growing importance of online advertising campaigns and promotions, the importance of such analyses of Internet users' sentiment is also growing, as a complement to more traditional market research. A key problem for this type of sentiment analysis is the rapidly growing volume of data and information contained in posts, comments, banners and advertising spots on social media, as well as the constant emergence of new social media. This problem is partly solved by increasing computing power and the multi-criteria processing of large amounts of data on ever-improved microprocessors and Big Data Analytics platforms. In addition, the possibilities of advanced multi-criteria processing of large data sets in ever shorter timeframes may increase significantly when generative artificial intelligence technology is involved in this processing.
The key issues of opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
The use of Big Data Analytics platforms of ICT information technologies in sentiment analysis for selected issues related to Industry 4.0
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can artificial intelligence help improve sentiment analysis of changes in Internet users' awareness conducted using Big Data Analytics as relevant additional market research conducted on a large amount of data and information extracted from the pages of many online social media users?
Can artificial intelligence help improve sentiment analysis conducted on large data sets and information on Big Data Analytics platforms?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, yes, artificial intelligence (AI) can indeed play a crucial role in improving sentiment analysis for changes in internet user awareness, especially when combined with big data analytics. Here's how:
  1. Natural Language Processing (NLP): AI techniques can be used to process and understand the natural language used in social media posts, comments, reviews, etc. This involves tasks such as text tokenization, part-of-speech tagging, named entity recognition, and more.
  2. Sentiment Analysis: AI algorithms can be trained to recognize and analyze the sentiment expressed in text data. This can help identify whether users are expressing positive, negative, or neutral opinions about specific topics, products, events, etc. (a short code sketch follows this list).
  3. Machine Learning Models: AI-powered machine learning models can be trained on large datasets of labelled social media data to predict sentiment accurately. These models can continuously learn and improve over time as they are exposed to more data.
  4. Deep Learning: Deep learning techniques, such as recurrent neural networks (RNNs) and transformers, can capture complex patterns in text data and improve sentiment analysis accuracy.
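As a minimal sketch of point 2, the Hugging Face transformers library exposes a ready-made sentiment pipeline; the example posts are invented.
from transformers import pipeline
# Downloads a default pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")
posts = [
    "The new update is fantastic, best release so far!",
    "Support never answered, I'm switching providers.",
]
for post, result in zip(posts, classifier(posts)):
    print(result["label"], round(result["score"], 3), "-", post)
At Big Data scale the same model would be applied in batches across a distributed platform, but the per-document step looks just like this.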
Thank You
  • asked a question related to Big Data
Question
5 answers
Will the combination of AI technology, Big Data Analytics and the high power of quantum computers allow the prediction of multi-faceted, complex macroprocesses?
Will the combination of generative artificial intelligence technology, Big Data Analytics and the high power of quantum computers make it possible to forecast multi-faceted, complex, holistic, long-term economic, social, political, climatic, natural macroprocesses?
Generative artificial intelligence technology is currently used to carry out complex activities, solve tasks intelligently, implement multi-criteria processes, create multi-faceted simulations, generate complex dynamic models, and creatively perform processes requiring the processing of large sets of data and information, i.e. things that until recently only humans could do. Recently there have been attempts to create computerized, intelligent analytical platforms for forecasting complex, multi-faceted, multi-criteria, dynamically changing macroprocesses, above all long-term economic, social, political, climatic and natural ones. The experience to date from research on the development of generative artificial intelligence and the other technologies of the current Fourth Technological Revolution, categorized as Industry 4.0/5.0, suggests that the dynamic technological progress now under way will probably expand the possibilities of building complex intelligent predictive models for such macroprocesses in the years to come. The current capabilities of generative artificial intelligence in improving forecasting models and forecasting specific trends within complex macroprocesses are still limited and imperfect. The imperfection of forecasting models may be due to the human factor: their design by humans and the human determination of the key criteria and determinants governing how they work. If in the future forecasting models are designed, improved, corrected and adapted to changing conditions at every stage by artificial intelligence itself, they will probably be much better than those functioning and built today. Another shortcoming is data obsolescence and data limitation. There is currently no way to connect an AI-equipped analytical platform to the entire resources of the Internet and process all the data and information they contain in real time; even today's fastest quantum computers and most advanced Big Data Analytics systems do not have such capabilities. However, it cannot be ruled out that the dynamic development of generative artificial intelligence and the ongoing competition among leading technology companies, which are developing intelligent chatbots, robots equipped with artificial intelligence, intelligent control systems for machines and processes, and so on, will lead to the creation of general artificial intelligence, i.e. advanced AI capable of self-improvement. It is important, however, that such advanced general artificial intelligence does not become fully autonomous, completely independent, or beyond human control, because this technology could then turn against people, creating high levels of risk and threat, including the risk that human existence on planet Earth becomes impossible.
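As a very small illustration of the classical end of this forecasting spectrum, the sketch below fits an ARIMA model with statsmodels to a hypothetical quarterly indicator; real macroprocess forecasting would involve far richer, multi-criteria models.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
# Invented quarterly indicator values (e.g., an economic index).
series = np.array([100, 102, 105, 103, 108, 112, 115, 113, 118, 123, 127, 125], dtype=float)
model = ARIMA(series, order=(1, 1, 1)).fit()
print(model.forecast(steps=4))   # point forecasts for the next four quarters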
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Will the combination of generative artificial intelligence technology, Big Data Analytics and the high power of quantum computers make it possible to forecast multi-faceted, complex, holistic, long-term economic, social, political, climatic, natural macro-processes?
Will the combination of AI technology, Big Data Analytics and high-powered quantum computers allow forecasting of multi-faceted, complex macro-processes?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I doubt that QC will be helpful. Theoretically there are at least three different types, only one of which is being developed to be useful, and only in a very special field. Quantum algorithms are totally different from classic algorithms, and I doubt that more than 1% of computer scientists know what they are speaking about when they mention QC.
  • asked a question related to Big Data
Question
1 answer
Dear Master's, MPhil, and PhD Research Scholars,
We're excited to invite you to the CIMPA School 2024 on Mathematics for Medicine and Health Sciences at Silpakorn University in Thailand.
The CIMPA School will be held from May 20 to May 31 2024 at our esteemed university campus. This school aims to provide participants with in-depth knowledge and practical skills in the application of mathematics in the fields of medicine and health sciences. The program offers a diverse range of courses and training sessions led by renowned experts in their respective fields.
During the school, you will have the opportunity to attend introductory courses that cover various topics such as pharmacokinetics, disease modeling, big data analysis, and numerical implementation of blood flow. Additionally, advanced courses will delve into specialized areas like big data and machine learning for public health, modeling of waterborne diseases, and coagulation-fragmentation modeling.
The CIMPA School at Silpakorn University will serve as a platform for interdisciplinary exchange, fostering collaborations among participants from different countries and backgrounds. You will not only gain valuable knowledge but also have the chance to network with fellow researchers and professionals, opening doors to future collaborations and opportunities.
We are pleased to inform you that CIMPA is offering financial support for eligible participants. If you require financial assistance, we encourage you to apply for CIMPA financial support through their website: https://www.cimpa.info/en/node/40
To learn more about the CIMPA School at Silpakorn University, please visit our official website: https://sites.google.com/view/cimpa-silpakorn2024/. You will find detailed information regarding the registration procedure, program schedule, and other relevant details.
We believe that your expertise and contributions would greatly enrich the learning experience for all participants. Your presence at the CIMPA School would be highly valued, and we look forward to welcoming you to Silpakorn University.
If you have any questions or require further information, please do not hesitate to reach out to us ( [email protected] or [email protected]).
Thank you for your attention, and we hope to see you at the CIMPA School at Silpakorn University.
Best regards,
P. Pornsawad
Coordinators
Relevant answer
Answer
I'm interested. Sorry for being late.
  • asked a question related to Big Data
Question
1 answer
To what extent do artificial intelligence technology, Big Data Analytics, Business Intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized through Internet marketing, within the framework of social media advertising campaigns?
Among the areas in which applications based on generative artificial intelligence are rapidly finding use are marketing communication processes realized within Internet marketing and social media advertising campaigns. More and more advertising agencies are using generative artificial intelligence to create the images, graphics, animations and videos used in advertising campaigns. Creating such key elements of marketing materials has become much simpler and cheaper, and the time needed to produce them has been significantly reduced. Moreover, thanks to Internet applications based on generative artificial intelligence that enable the creation of photos, graphics, animations and videos, it is no longer only advertising agencies employing professional illustrators, graphic designers, screenwriters and filmmakers that can create professional marketing materials and advertising campaigns. With these applications, graphic design platforms and free smartphone apps offered by technology companies, advertising spots and entire campaigns can be designed, created and executed by Internet users, including social media users, who have not previously been involved in creating graphics, banners, posters, animations or advertising videos. Opportunities are thus emerging for Internet users who maintain social media profiles to create professional promotional materials and advertising campaigns. On the other hand, generative artificial intelligence can be used unethically to generate disinformation, fake news and deepfakes. The significance of this problem, including growing disinformation on the Internet, has increased rapidly in recent years. The deepfake image processing technique uses artificial intelligence to combine and manipulate images of human faces.
In order to reduce the scale of disinformation spreading in online media, it is necessary to create a universal system for labeling photos, graphics, animations and videos created with generative artificial intelligence. A key factor facilitating this kind of disinformation is that many legal issues related to the technology have not yet been regulated. It is therefore also necessary to refine legal norms on copyright and intellectual property protection to take into account works created using generative artificial intelligence. In addition, social media companies should constantly improve their tools for detecting and removing graphic and video material created with deepfake techniques.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent do artificial intelligence technology, Big Data Analytics, Business Intelligence and other ICT information technology solutions typical of the current Fourth Technological Revolution support marketing communication processes realized within the framework of Internet marketing and social media advertising campaigns?
How do artificial intelligence technology and other Industry 4.0/5.0 technologies support Internet marketing processes?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Industry 5.0 is a new production model which focuses on the cooperation between humans and machines. It stands for the recognition that technological advances and human insight and creativity are equally important.
Regards,
Shafagat
  • asked a question related to Big Data
Question
2 answers
2024 3rd International Conference on Artificial Intelligence, Internet and Digital Economy (ICAID 2024) will be held in Bangkok, Thailand on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: March 19, 2024
Registration Deadline: March 29, 2024
Final Paper Submission Date: April 09, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- The development of artificial intelligence (AI tools, artificial intelligence and evolutionary algorithms, intelligent user interfaces, intelligent information fusion, etc.) and their applications in economic and social development.
- The development of mobile Internet, artificial intelligence, big data and other technologies and their application in economic and social development.
- Artificial intelligence and digital economy development frontier in the Internet era and typical cases.
- Technology, methods and applications of the integration and development of digital economy and artificial intelligence.
- Other topics related to artificial intelligence, Internet and digital economy can be contributed.
All accepted papers will be published in Atlantis Highlights in Intelligent Systems (AHIS, ISSN 2589-4919) and submitted to EI Compendex and Scopus for indexing.
For more details, please visit:
Relevant answer
Answer
I am from Ethiopia and happy to participate in this conference.
  • asked a question related to Big Data
Question
4 answers
I need to extract the x,y coordinates of a PCA plot (generated in R) to plot in Excel (my boss prefers Excel).
The code to generate the PCA:
library(ggfortify)  # provides autoplot() for prcomp objects
pca <- prcomp(data, scale. = TRUE, center = TRUE)
autoplot(pca, label = TRUE)
If we take a look at pca$x, the first two PC scores are as follows for an example point is:
29. 3.969599e+01 6.311406e+01
So for sample 29, the PC scores are 39.69599 and 63.11406.
However, if you look at the output plot in R, the coordinates are not 39.69599 and 63.11406 but roughly 0.09 and 0.2.
Obviously some simple algebra can estimate how the PC scores are converted into the plotted coordinates but I can't do this for ~80 samples.
Can someone please shed some light on how R gets these coordinates and maybe a location to a mystery coordinate file or a simple command to generate a plotted data matrix?
NOTE: pca$x does not give me what I want
Update:
Redoing prcomp() without scale and center gives me this for PC1 and PC2 for the first 5 samples
1 -8.9825883 0.0113775
2 -16.3018548 9.1766104
3 -21.0626458 3.0629666
4 5.5305875 4.0334291
5 0.2349433 12.4872609
However the plot ranges from -0.15 to 0.4 for PC1 and -0.35 to 0.15 for PC2
(Plot attached)
Relevant answer
Answer
I'm conscious my comment is hardly timely, but I believe the issue might be that some visualisation devices in R happen to "scale" the relevant PCA scores in the background. By this I mean the issue might be autoplot(), or other visualisation facilities - not the PCA you performed, per se.
Take for example the function biplot(), which is readily available in base R to visualise objects generated by the function prcomp(). If you look at how the function is coded (stats:::biplot.prcomp) you'll see that it divides the first two columns of PCA scores by the corresponding standard deviations, i.e.:
scores <- pca$x
lam <- pca$sdev[1:2]  # sdev is a vector, so it is indexed without a comma
pca_plot_coord <- t(t(scores[, 1:2]) / lam)
(Notice that pca$sdev is the same as taking the square root of the relevant eigenvalues of the covariance matrix of your centred data, as they happen to equal the variance along the corresponding scores, if the computational structure of PCA is followed correctly. Fun fact, this equivalence won't necessarily hold if you use some canned R routines for PCA that rely on the singular value decomposition of the data matrix instead of manipulating its covariance matrix).
So in a nutshell, as previous comments have already pointed out, what you're actually interested in are the PCA "scores" (pca$x); however some visualisation facilities in R, such as biplot(), might do some scaling in the background.
If you wanted to make your life simpler you could just use PCA() in the package FactoMineR and then plot() the resulting object: as far as I can tell, the plotted result is not manipulated further, and what you get in the plot is based on the PCA scores, as you'd expect. And once you have the scores you want, write.csv(pca$x[, 1:2], "pca_scores.csv") will export them to a file that Excel can open.
  • asked a question related to Big Data
Question
1 answer
Are sciences more of a continuum from hard to soft than those two as discrete categories? How? Why? I think the former. My book here somewhat touches on that question:
Relevant answer
Answer
This continuum would certainly apply to psychology. There is some use of mathematical models, and a great deal of methodology is applied in some cases, but there are many non-quantitative hypotheses tested only statistically. Sometimes the concepts employed in these studies are not clearly defined. Worse still, the sampling of subjects is opportunistic and stimuli are often selected on an ad hoc basis. Mostly it is not tight (hard) science.
I am not sure whether sociology should be included in the discussion of science, in which case you need to go beyond soft to hairy. I have met quality material, but innumeracy and lack of clarity are all too common features of the thinking found here.
  • asked a question related to Big Data
Question
3 answers
What are the applications of Industry 4.0/5.0 technologies, including Big Data Analytics and generative artificial intelligence to business entities to improve business entity management processes?
What are the applications of Industry 4.0/5.0 technologies, including Big Data Analytics, Data Science, multi-criteria simulation models, digital twins, additive manufacturing, Blockchain, smart technologies and also generative artificial intelligence to business entities in order to improve internal business intelligence information systems supporting the management processes of a company, enterprise, corporation or other type of business entity?
In recent years, there has been a growing scale of implementation of Industry 4.0/5.0 technologies, including Big Data Analytics, Data Science, multi-criteria simulation models, digital twins, additive manufacturing, Blockchain, smart technologies and generative artificial intelligence, in business entities, in order to improve the internal Business Intelligence information systems supporting the management of a company, enterprise, corporation or other type of business entity. The Covid-19 pandemic accelerated the digitization of the economy. The importance of analytics conducted via the Internet, or on data downloaded from the Internet, is also growing. An example is sentiment analysis of data downloaded from the Internet and run on Big Data Analytics platforms, which serves as an additional instrument of market and marketing research and an additional source of data for Business Intelligence analyses. This is particularly important because the role of Internet marketing, including viral marketing and real-time marketing on social media, has been increasing in recent years. Accordingly, in many industries and sectors of the economy there is already a growing application of Industry 4.0 technologies such as Big Data Analytics, Data Science, cloud computing, machine learning, the personal and industrial Internet of Things, artificial intelligence, Business Intelligence, autonomous robots, horizontal and vertical data system integration, multi-criteria simulation models, additive manufacturing, Blockchain, cybersecurity instruments, Virtual and Augmented Reality, and other advanced Data Mining technologies. In addition, interesting research using Big Data Analytics is being conducted on how consumer behavior in the markets for goods and services changes under the impact of advertising campaigns conducted on the Internet with new Internet marketing tools, used in new online media and above all in social media. Behavioral economics and finance are growing, including analysis of how the media shape consumer opinions on brand recognition and on companies' product and service offerings, through the growth of online information services and social media portals. Online viral marketing based on social media portals and on customer data collected and processed in Big Data Analytics databases is developing rapidly, new online marketing instruments are being applied mainly on social media portals and by e-commerce companies, and Internet technology companies and fintechs are emerging that offer online information services to assist marketing management, including the planning of advertising campaigns for products sold via the Internet. For this purpose, the aforementioned sentiment analyses are used to study Internet users' opinions on the awareness, recognition, brand image, mission and offerings of particular companies. Sentiment analysis is carried out on large data sets taken from various websites, including millions of social media pages, collected in Big Data systems; the analytical data collected in this way is very helpful in planning advertising campaigns in new media, including social media.
These campaigns advertise, among other things, products and services sold via the Internet and available in online stores. The development of e-commerce is therefore determined mainly by advances in ICT and in the advanced data processing technologies of Industry 4.0, as well as by new technologies for securing financial transactions carried out over the Internet, such as Blockchain. In my opinion, ongoing scientific research confirms a strong correlation between the development of Big Data, Data Science and Data Analytics technologies and the efficiency of the use of knowledge resources, including in economics, finance and organizational management. These technologies have been developing particularly rapidly in recent years and are being applied in knowledge-based economies, both in scientific research and in commercially operating enterprises and financial and public institutions. Given the growing importance of this issue, it is worth analyzing the correlation between the development of Big Data technologies, Data Science, Data Analytics and Business Intelligence analytics and the efficiency of using knowledge resources to solve key problems of civilizational development. Analytics based on Business Intelligence, Data Science and Big Data Analytics is increasingly used to improve business management processes, and its development, based on implementing ICT and Industry 4.0 technologies in analytical processes, has a great future in the years to come. In addition, the application of artificial intelligence technologies can increase the efficiency of using Big Data Analytics and the other Industry 4.0/5.0 technologies that support business management processes.
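As a minimal illustration of the multi-criteria simulation models mentioned above, the sketch below runs a toy Monte Carlo profit simulation with NumPy; all distributions and figures are invented for the example.
import numpy as np
rng = np.random.default_rng(42)
n_runs = 10_000
# Invented uncertain inputs: monthly demand, unit price, unit cost.
demand = rng.normal(1_000, 150, n_runs)
price = rng.uniform(9.0, 11.0, n_runs)
cost = rng.normal(6.5, 0.5, n_runs)
profit = demand * (price - cost) - 1_500   # 1,500 of fixed costs per run
# Summary statistics of the kind a BI dashboard might report to management.
print("expected profit:", round(float(profit.mean())))
print("5% worst-case quantile:", round(float(np.percentile(profit, 5))))
print("probability of loss:", round(float((profit < 0).mean()), 3))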
I have described the issues of application of Big Data and Business Intelligence technologies in the context of enterprise risk management in the following article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In addition, I described the issues of opportunities and threats to the development of AI technology applications in my following article:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the applications of Industry 4.0/5.0 technologies, including Big Data Analytics, Data Science, multi-criteria simulation models, digital twins, additive manufacturing, Blockchain, smart technologies and also generative artificial intelligence, in business entities, aimed at improving the internal Business Intelligence information systems that support the management processes of a company, enterprise, corporation or other type of business entity?
What are the applications of Industry 4.0/5.0 technologies, including Big Data Analytics and generative artificial intelligence, in business entities to improve their management processes?
How do Big Data Analytics and generative artificial intelligence support the management processes of business entities?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Big Data Analytics: Big Data Analytics uses large volumes of data and machine learning technologies to discover patterns and insights that help organisations make effective decisions.
  • asked a question related to Big Data
Question
2 answers
What are the primary issues and problems in the data intake phase of a big data workflow, and how can businesses ensure efficient and reliable ingestion of varied data sources?
Relevant answer
Answer
In the data intake phase of a big data workflow, businesses encounter various challenges, primarily stemming from the sheer volume and velocity of data. Managing this data influx in real-time can lead to bottlenecks. Adopting scalable frameworks like Apache Kafka or Apache Flume and utilizing distributed systems such as Hadoop or Spark can mitigate these issues. Moreover, the variety and complexity of data, arriving in diverse formats from multiple sources, add layers of intricacy. Flexible data ingestion tools that accommodate different data types and sources, coupled with technologies like schema-on-read, can streamline this process.
A significant concern in this phase is ensuring data quality and consistency. Inaccurate or poor-quality data can skew analysis and lead to faulty decision-making. Integrating validation, cleansing, and standardization processes directly into the data ingestion phase is crucial. Automated tools are invaluable here, ensuring that the data remains pristine and reliable.
Security and privacy are paramount, especially when handling sensitive information. Adhering to regulations like GDPR or HIPAA, employing stringent data protection measures such as encryption, data masking, and robust access controls are non-negotiable to safeguard data integrity and confidentiality.
Scalability is another critical aspect. As businesses grow, so do their data needs. Designing data ingestion systems that can expand seamlessly, utilizing cloud solutions or systems capable of horizontal scaling, ensures that the data infrastructure evolves in tandem with the business.
Integration with existing systems poses its challenges, demanding a strategic approach. Employing middleware, leveraging an enterprise service bus (ESB), and adhering to compatibility standards ensure that the new ingestion pipelines dovetail smoothly with the existing data ecosystem.
Lastly, for businesses requiring real-time insights, the ability to process data instantaneously is crucial. Here, stream processing technologies like Apache Storm, Flink, or Kafka Streams come into play, enabling businesses to analyze and act on data in the moment.
In conclusion, navigating the complexities of the data intake phase requires a multifaceted strategy. By employing robust technological solutions, ensuring data integrity, safeguarding security, maintaining scalability, facilitating seamless integration, and enabling real-time processing, businesses can establish an efficient, reliable foundation for their data-driven endeavors. Continual monitoring and adaptation of the data ingestion strategy are imperative to align with evolving business objectives and data landscapes.
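To make the streaming ingestion and validation points above concrete, below is a minimal Python sketch using the kafka-python client; the broker address, topic name and required fields are illustrative assumptions, not details from the answer itself.

# Minimal sketch of validated streaming ingestion with kafka-python.
# The broker address ("localhost:9092"), topic name ("raw-events") and
# required fields are illustrative placeholders.
import json
from kafka import KafkaConsumer

REQUIRED_FIELDS = {"event_id", "timestamp", "source"}  # assumed schema

consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # Validate at ingestion time: reject records missing required fields
    # instead of letting bad data propagate downstream.
    if not REQUIRED_FIELDS.issubset(record):
        print("Rejected malformed record:", record)
        continue
    print("Ingested event", record["event_id"], "from", record["source"])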
  • asked a question related to Big Data
Question
2 answers
How can artificial intelligence technology improve the process of organizational management of modern urban agglomerations developed and operating according to the green smart city model?
Industry 4.0/5.0 technologies are increasingly being used to manage the organization of modern urban agglomerations developed and operating according to the green smart city model. Since artificial intelligence technology has been developing particularly rapidly recently, and numerous new applications of this technology are emerging in various sectors of the economy, the opportunities for applying AI technologies to improve systems for the automated management of the organization of modern urban agglomerations are also increasing. Besides, combining Big Data Analytics, Data Science, the Internet of Things, multi-criteria simulation models, digital twins and cloud computing with artificial intelligence technology and other ICT technologies makes it possible to significantly increase the efficiency of operation and improvement of systems for the automated management of the organization of modern urban agglomerations.
Smart home technologies and smart city technologies are developing on the basis of new ICT information technologies and Industry 4.0/Industry 5.0. Commercial applications of smart home technologies allow remote management and automation of the use of devices that control power consumption, generate energy for home use, store and conserve energy, etc. Such applications fit perfectly with the development of renewable and zero-carbon energy installations, which are installed in homes to increase the scope of energy self-sufficiency. In such a situation, it is necessary to develop systemic solutions and infrastructure for collecting the surplus energy produced by prosumer citizens. In this regard, a computerized system for managing individual household appliances based on smart home technology fits perfectly into the current trend of pro-environmental transformation of the economy. Smart technologies based on artificial intelligence or machine learning, using cloud computing and the Internet of Things, allow the integration of various household appliances, including home electronics and appliances equipped with microprocessors and smart software. In this way, individual household appliances can be integrated into a central, integrated management system based on smart home technology. Such a system can be controlled from, for example, a smartphone, a smart TV remote control or a smart watch equipped with the necessary software. Further devices, such as home robots, which can be very helpful for the elderly, can be connected "modularly" to this kind of central integrated management system. On the other hand, the development of computerized management systems for individual household appliances based on smart home technology is also conditioned by the issue of improving cybersecurity systems and cybersecurity risk management. This issue is particularly relevant when a central, integrated system for the remote management of individual household appliances is connected to the Internet.
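As one illustration of how heterogeneous household appliances can be integrated into such a central management system, many smart home platforms exchange device state over a lightweight publish/subscribe protocol such as MQTT. Below is a minimal Python sketch using the paho-mqtt client (1.x callback API); the broker address and topic layout are purely illustrative assumptions.

# Minimal sketch of a smart home hub subscribing to device state over MQTT.
# Uses the paho-mqtt 1.x callback API; the broker address and the
# "home/+/state" topic layout are illustrative placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe to state updates from every device under the "home/" prefix.
    client.subscribe("home/+/state")

def on_message(client, userdata, msg):
    # A real hub would update a dashboard or trigger automation rules here.
    print("Device", msg.topic, "reported:", msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # assumed local broker

# Example: send a setpoint command to a thermostat, then process messages.
client.publish("home/thermostat/set", "21.5")
client.loop_forever()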
In a smart city, on the one hand, many of the city's functions are carried out through automated and centrally managed information systems using new Industry 4.0/Industry 5.0 technologies. On the other hand, citizens of a smart city can use many of the city's information services, currently offered mainly through websites and smartphone apps, where certain defined categories of information appear on the smartphone according to the citizen's location and are automatically added to the calendar, etc. Particularly relevant information applications include systems that alert citizens to unusual weather phenomena, climatic disasters, locally growing pandemic threats, etc. Smart urban information systems can also cooperate with autonomous vehicle systems.
The issues of energy efficiency in buildings, eco-technologies and eco-innovative building materials providing high levels of energy efficiency, sustainable construction, the green smart city, etc. are some of the important elements in carrying out a pro-environmental transformation of the economy towards a sustainable, green, zero-carbon, zero-growth and closed-loop economy. I am conducting research on the key determinants of smoothly carrying out the pro-environmental transformation of the classic growth-oriented, brown, linear economy of excess into a sustainable, green, zero-carbon, zero-growth and closed-loop economy. In view of the above, the issue of green, sustainable construction is one of the key elements of the pro-environmental transformation of the economy and of the development of urban agglomerations built in the green smart city model. More and more research institutes are working to develop new green technologies and eco-innovations that will make it possible to carry out the green transformation of the economy more efficiently and faster. For example, laboratories at research institutes are working on new, innovative types of photovoltaic panels, such as panels that look like window glass but also function as photovoltaic panels. If these kinds of photovoltaic panels that look like windowpanes are properly refined technologically and come onto the market, they could revolutionize the building of energy self-sufficient green smart cities. Such innovative photovoltaic panel technologies could be very useful in buildings that are built or planned in modern, sustainable green smart cities.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence technology improve the organizational management process of modern urban agglomerations developed and operating according to the green smart city model?
How can artificial intelligence improve the operation of green smart city management systems?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Very insightful, congratulations
  • asked a question related to Big Data
Question
7 answers
What aspects of working with data are the most time-consuming in your research activities?
  1. Data collection
  2. Data processing and cleaning
  3. Data analysis
  4. Data visualization
What functional capabilities would you like to see in an ideal data work platform?
Relevant answer
Answer
Yes, I don't mind, and I am interested in everything related to statistics because it is my specialty.
I would be glad if you could share the details.
Thank You.
  • asked a question related to Big Data
Question
1 answer
Colleagues, good day!
We would like to reach out to you for assistance in verifying the results we have obtained.
We employ our own method for performing deduplication, clustering, and data matching tasks. This method allows us to obtain a numerical value of the similarity between text excerpts (including data table rows) without the need for model training. Based on this similarity score, we can determine whether records match or not, and perform deduplication and clustering accordingly.
This is a direct-action algorithm, relatively fast and resource-efficient, requiring no specific configuration (it is versatile). It can be used for quickly assessing previously unexplored data or in environments where data formats change rapidly (but not the core data content), and retraining models is too costly. It can serve as the foundation for creating personalized desktop data processing systems on consumer-grade computers.
We would like to evaluate the quality of this algorithm in quantitative terms, but we cannot find widely accepted methods for such an assessment. Additionally, we lack well-annotated datasets for evaluating the quality of matching.
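For quantitative evaluation, record matching is commonly scored with pairwise precision, recall and F1 against a manually annotated gold standard, and clustering with indices such as the Adjusted Rand Index. A minimal Python sketch with scikit-learn, in which all labels are hypothetical examples rather than real annotations:

# Minimal sketch of standard matching/clustering quality metrics.
# All gold labels and predictions below are hypothetical.
from sklearn.metrics import (adjusted_rand_score, f1_score,
                             precision_score, recall_score)

# Pairwise match decisions: 1 = "same entity", 0 = "different entities".
gold_pairs = [1, 0, 1, 1, 0, 0, 1, 0]
pred_pairs = [1, 0, 1, 0, 0, 1, 1, 0]

print("Precision:", precision_score(gold_pairs, pred_pairs))
print("Recall:   ", recall_score(gold_pairs, pred_pairs))
print("F1:       ", f1_score(gold_pairs, pred_pairs))

# Cluster assignments for six records under gold vs. predicted clustering.
gold_clusters = [0, 0, 1, 1, 2, 2]
pred_clusters = [0, 0, 1, 2, 2, 2]
print("Adjusted Rand Index:", adjusted_rand_score(gold_clusters, pred_clusters))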
If anyone is willing and able to contribute to the development of this topic, please step forward.
Sincerely, The KnoDL Team
Relevant answer
Answer
Dear teammates,
I am highly experienced in clustering using optimization algorithms such as the genetic algorithm, simulated annealing (SA), particle swarm optimization and others, so I think I am well qualified to join your group. Please let me know if you agree.
Thank you
  • asked a question related to Big Data
Question
1 answer
How can the application of generative artificial intelligence improve the existing applications of Big Data Analytics and increase the scale of application of these technologies in carrying out analyses of large data sets, generating multi-criteria simulation models and carrying out predictive analyses and projections?
The acceleration of the processes of digitization of the economy triggered by the development of the Covid-19 pandemic has resulted in a significant increase in computerization, Internetization, applications of ICT information technologies and Industry 4.0 to various economic processes. There is an increase in applications of specific Industry 4.0 technologies in many industries and sectors of the economy, i.e., such as Big Data Analytics, Data Science, cloud computing, machine learning, personal and industrial Internet of Things, artificial intelligence, Business Intelligence, autonomous robots, horizontal and vertical data system integration, multi-criteria simulation models, digital twins, additive manufacturing, Blockchain, cybersecurity instruments, Virtual and Augmented Reality, and other advanced Data Mining technologies. In my opinion, among others, in the fields of medical therapies, communications, logistics, new online media, life science, ecology, economics, finance, etc., and also in the field of predictive analytics, there is an increase in the applications of ICT information technologies and Industry 4.0/Industry 5.0. Artificial intelligence technologies are growing rapidly as they find applications in various industries and sectors of the economy. It is only up to human beings how and in what capacity artificial intelligence technology will be implemented in various manufacturing processes, analytical processes, etc., where large data sets are processed in the most efficient manner. In addition, various opportunities are opening up for the application of artificial intelligence in conjunction with other technologies of the current fourth industrial revolution referred to as Industry 4.0/5.0. It is expected that in the years to come, applications of artificial intelligence will continue to grow in various areas, fields of manufacturing processes, advanced data processing, in improving manufacturing processes, in supporting the management of various processes, and so on.
I have been studying this issue for years and have presented the results of my research in the article, among others:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the application of generative artificial intelligence improve the existing applications of Big Data Analytics and increase the scale of application of these technologies in carrying out analyses of large data sets, generating multi-criteria simulation models and carrying out predictive analyses and projections?
How can the application of generative artificial intelligence improve existing applications of Big Data Analytics?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The application of generative AI can significantly improve the existing applications of Big Data Analytics and increase their scale of application in carrying out analysis of processing large data sets, generating multi-criteria simulation models, and conducting predictive analysis and projections. By automating critical and time-consuming steps, such as feature engineering and model selection, generative AI can help non-experts to apply Big Data Analytics in a more efficient and effective manner. Additionally, generative AI can be used to generate synthetic data, multi-criteria simulation models, and probabilistic forecasts, which can provide organizations with a better understanding of complex and uncertain environments.
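As a small illustration of the synthetic data point above, one simple baseline (far simpler than a generative AI model) is to fit the mean and covariance of a numeric dataset and sample new records from the fitted distribution. A minimal NumPy sketch, using randomly generated stand-in data:

# Minimal sketch: synthetic numeric records that preserve the mean and
# covariance of an original dataset. The "original" data is stand-in data.
import numpy as np

rng = np.random.default_rng(42)
original = rng.normal(loc=[10.0, 5.0], scale=[2.0, 1.0], size=(1000, 2))

# Fit a multivariate Gaussian to the original data...
mean = original.mean(axis=0)
cov = np.cov(original, rowvar=False)

# ...and sample synthetic records from it.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("Original mean: ", np.round(mean, 2))
print("Synthetic mean:", np.round(synthetic.mean(axis=0), 2))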
  • asked a question related to Big Data
Question
1 answer
The RAY framework is an open-source project that provides a simple, universal API for building distributed applications. It is particularly well-suited for applications that require parallel and distributed computing, making it a popular choice for machine learning, deep learning, and big data processing tasks. Here are some reasons why and scenarios when the RAY framework is used:
  1. Handling Complex Distributed Computing Tasks: RAY is designed to simplify the process of building and scaling complex distributed applications. It can efficiently handle tasks that require the coordination of a large number of compute nodes.
  2. Machine Learning and Deep Learning: In the field of AI, particularly in machine learning and deep learning, RAY is widely used for its ability to parallelize and distribute training and inference tasks. It supports popular machine learning frameworks like TensorFlow, PyTorch, and Scikit-Learn.
  3. Scalability and Flexibility: RAY provides excellent scalability and flexibility, making it suitable for applications that need to scale up or down based on the workload. It can dynamically allocate resources to meet the demands of the application.
  4. Ease of Use: Despite its powerful capabilities, RAY is user-friendly and relatively easy to implement, even for developers who may not have extensive experience in distributed systems.
  5. Real-time Processing: RAY is a good fit for applications that require real-time processing, such as video streaming analysis or online machine learning applications, due to its low-latency execution capabilities.
  6. Big Data Processing: It is also used in big data processing, where tasks need to be distributed across multiple nodes to handle large volumes of data efficiently.
  7. Reinforcement Learning: RAY includes RLLib, a library specifically for reinforcement learning, making it a go-to framework for projects in this area.
  8. Model Serving and Pipelines: For serving machine learning models and setting up data pipelines, RAY offers tools that simplify these processes, especially in distributed environments.
  9. Research and Experimentation: In research environments where experiments with different settings are conducted, RAY's flexibility and scalability make it a strong choice for running multiple simulations or models concurrently.
  10. Integration with Cloud Services: RAY can be integrated with various cloud services, making it suitable for applications that are cloud-based or require cloud resources.
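As a concrete illustration of the parallelism described in the list above, the canonical Ray pattern turns an ordinary Python function into a distributed task with the @ray.remote decorator. A minimal sketch, assuming Ray is installed (ray.init() starts a local runtime):

# Minimal Ray sketch: run an ordinary Python function as parallel tasks.
import ray

ray.init()  # local runtime; on a cluster, pass the cluster address

@ray.remote
def square(x):
    return x * x

# Launch tasks in parallel; .remote() returns futures immediately.
futures = [square.remote(i) for i in range(8)]

# Block until all results are ready.
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
ray.shutdown()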
Relevant answer
Answer
Ray shines when you need to supercharge your Python applications (especially AI workloads) from single-machine development to large-scale clusters. It simplifies distributed computing, letting you write seamless code that scales from your laptop to hundreds of GPUs without rewriting. Use Ray for tasks like parallel processing, machine learning training and serving, hyperparameter tuning, and reinforcement learning - all in one unified framework.
  • asked a question related to Big Data
Question
8 answers
3rd International Conference on Electrical, Power and Grid Systems (ICEPGS 2024), which will be held in Kuala Lumpur, Malaysia, on January 26-28, 2024.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
· Solar Power
· New Energy Power Generation
· Transmission of Electrical Energy
· Signal and System
· Power Systems and Automation
· Electrical System Modeling
· Power System Communication and Control
· Computational Intelligence in Electrical Engineering
· Electrical Materials and Equipment
· Power System Protection and Control
· High Voltage and Insulation
· Smart Grid
· Microgrid
· Smart Sensors
· Smart Networks and Communications
· Materials for Electrical and Optoelectronics
· Machine Learning and Artificial Intelligence in Power Systems
· Big Data Application in Power Systems
All accepted papers will be published in the Conference Proceedings, and submitted to EI Compendex, Scopus for indexing.
Important Dates:
Full Paper Submission Date: December 10, 2023
Registration Deadline: December 30, 2023
Final Paper Submission Date: January 12, 2024
Conference Dates: January 26-28, 2024
For More Details please visit:
Relevant answer
Answer
Usman Anka Abdulaziz Yes, the conference's Final Paper Submission Date is January 12, 2024.
  • asked a question related to Big Data
Question
2 answers
What are the analytical tools supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks available on the Internet that can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business, implementation of economic, investment, business projects, etc.?
Since OpenAI brought ChatGPT online in November 2022, interest in the possibilities of using intelligent chatbots for various aspects of business operations has strongly increased among business entities. Intelligent chatbots originally only or mainly enabled conversations and discussions, and answered questions using specific data resources, information and knowledge taken from a selection of multiple websites. Then, in the following months, OpenAI released other intelligent applications on the Internet, allowing Internet users to generate images, photos, graphics and videos, solve complex mathematical tasks, create software for new computer applications, generate analytical reports, and process various types of documents on the basis of given commands. In addition, in 2023 other technology companies also began to make their intelligent applications available on the Internet, through which certain complex tasks can be carried out to facilitate certain processes and aspects of the operation of companies, enterprises, financial institutions, etc., and thus facilitate business. There is a steady increase in the number of intelligent applications and tools available on the Internet that can support the implementation of various aspects of business activities carried out in companies and enterprises. At the same time, the number of new business applications of these intelligent applications is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the analytical tools available on the Internet supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks, which can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business activity, implementation of economic, investment, business projects, etc.?
What are the AI-enabled analytical tools available on the Internet that can be helpful to business?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
There are many AI-enabled machine learning tools available on the Internet, e.g. scikit-learn, TensorFlow, Azure Machine Learning, Google Cloud AI Platform, H2O.ai, etc.
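To give a flavour of the first tool on that list, here is a minimal scikit-learn sketch that trains and evaluates a classifier on the library's bundled Iris dataset; it illustrates the basic workflow, not a business-ready model:

# Minimal scikit-learn sketch: train/test split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))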
  • asked a question related to Big Data
Question
1 answer
How are researchers addressing the ethical considerations and privacy concerns associated with collecting and analyzing big data?
Relevant answer
Answer
There is a lot of literature about ethics focusing both on Big Data collection and Artificial Intelligence, mainly about learning systems.
Following a list of papers dealing with such topics:
- Understanding Fairness Requirements for ML-based Software
- An Ontology-Based Approach to Engineering Ethicality Requirements
- Big Data's End Run around Anonymity and Consent, pp. 44-75
- Wasabi: A Conceptual Model for Trustworthy Artificial Intelligence
- Data-Driven System-Level Design Framework for Responsible Cyber-Physical-Social Systems
- The Many Facets of Data Equity
and many others you can find just browsing the web
  • asked a question related to Big Data
Question
3 answers
What is the future of generative artificial intelligence technology applications in finance and banking?
The banking sector is among the sectors in which new ICT, Internet and Industry 4.0/5.0 information technologies, including but not limited to applications of generative artificial intelligence in finance and banking, are being implemented particularly intensively. Commercial online and mobile banking have been among the particularly fast-growing areas of banking in recent years. In addition, during the SARS-CoV-2 (Covid-19) coronavirus pandemic, in conjunction with government lockdowns imposed on selected sectors of the economy, mainly service companies, and national quarantines, the development of online and mobile banking accelerated. Solutions such as contactless payments made with a smartphone developed rapidly. On the other hand, due to the acceleration of the development of online and mobile banking, the increase in the scale of payments made online and the growth of online settlements related to the development of e-commerce, the scale of cybercriminal activity has also increased since the pandemic. When OpenAI put its first intelligent chatbot, i.e. ChatGPT, online for Internet users in November 2022 and other Internet-based technology companies accelerated the development of analogous solutions, commercial banks saw great potential for themselves. More chatbots modeled on ChatGPT and new tools based on generative artificial intelligence technology quickly began to appear on the Internet. Commercial banks thus began to adapt the emerging new AI solutions to their needs on their own. The IT professionals employed by the banks proceeded to teach intelligent chatbots and to implement tools based on generative AI in selected processes and activities performed permanently and repeatedly in the bank. Accordingly, AI technologies are increasingly being implemented by banks in cybersecurity systems; in processes for analyzing the creditworthiness of potential borrowers; in improving marketing communications with bank customers; in automating the remote telephone and Internet communications of banks' call center departments; in developing market analyses carried out on Big Data Analytics platforms using large sets of data and information extracted from various bank information systems, from databases available on the Internet and online financial portals, and from thousands of processed posts and comments of Internet users on social media pages; and in the increasingly automated, real-time generation, based on current large sets of information and data, of industry analyses and of analyses and extrapolations of market trends into the future, etc. The scale of new applications of generative artificial intelligence technology in the various banking processes carried out in commercial banks is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What is the future of generative artificial intelligence technology applications in finance and banking?
What is the future of AI applications in finance and banking?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I envision a time when an AI bot records every customer's banking history for analysis of risk, fraud, and other finance-related assessments. It might be a new form of credit score.
  • asked a question related to Big Data
Question
3 answers
How to build a Big Data Analytics system based on artificial intelligence, more advanced than ChatGPT, that learns only real information and data?
How to build a Big Data Analytics system that analyses information taken from the Internet, conducts real-time analytics based on artificial intelligence, is integrated with an Internet search engine, and is more advanced than ChatGPT in that, through discussion with Internet users, it improves data verification and learns only real information and data?
Well, ChatGPT is not perfect in terms of self-learning new content and perfecting the answers it gives, because it sometimes gives confirmatory answers even when the question formulated by the Internet user contains information or data that is not factually correct. In this way, in the course of such "discussions", ChatGPT can learn not only new but also false information and fictitious data. Currently, various technology companies are planning to create, develop and implement computerised analytical systems based on artificial intelligence technology similar to ChatGPT, which will find application in various fields of big data analytics, business and research work, in various business entities and institutions operating in different sectors and industries of the economy. One of the directions of development of this kind of artificial intelligence technology is the plan to build a Big Data Analytics system that analyses information taken from the Internet, conducts real-time analytics, is integrated with an Internet search engine, and is more advanced than ChatGPT in that, through discussion with Internet users, it improves data verification and learns only real information and data. Some technology companies are already working on creating this kind of technological solution. Presumably, many technology start-ups that plan to create, develop and implement specific technological innovations based on a generation of artificial intelligence technology similar to ChatGPT are also considering undertaking research in this area, and perhaps developing a start-up based on a business concept in which technological innovation 4.0, including the aforementioned artificial intelligence technologies, is a key determinant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build a Big Data Analytics system that analyses information taken from the Internet, conducts real-time analytics based on artificial intelligence, is integrated with an Internet search engine, and is more advanced than ChatGPT in that, through discussion with Internet users, it improves data verification and learns only real information and data?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
This is a very complex question but I will try to synthesize my main points into what I consider is the main problem with LLMs and my perceived solution.
One of the underlying problems with LLMs is hallucination and the wrong answers it produces. This has its roots in two subproblems. The first is the data and its training; the second is the nature of the algorithms and the assumption of graceful degradation. I think that the first one is easy to solve: do not throw junk data at the model and expect "statistical miracles" to occur and bubble truth up from noise. That is a nice mathematical hallucination on our part (no amount of mathematical Platonism can compete with the messy, "mundane" day to day). There is no replacement for the hard work of sorting good data from bad.
The second problem is more difficult to solve. It rests on several assumptions that are ingrained in neural networks. Neural networks promised graceful degradation, but in reality we need neural networks to abstain from graceful degradation in critical situations. Hallucination is based on this philosophical flaw of neural networks. Graceful degradation relies on distributed representations and the assumption that, even though the whole representation is not present, if enough of a representation is present the network will output the complete representation. This is an extremely strong assumption to embrace as a universal case for all data; by necessity it is an existential case, not a universal one. A possible solution is to use an ensemble of algorithms that contains neural and non-neural algorithms, where the consensus wins.
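A minimal sketch of the neural/non-neural consensus idea proposed above, using scikit-learn's VotingClassifier to combine a small neural network with two non-neural models; the dataset and hyperparameters are illustrative choices, not the answerer's own setup:

# Minimal sketch: majority-vote consensus of neural and non-neural models.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(max_iter=1000, random_state=0))),
        ("rf", RandomForestClassifier(random_state=0)),
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
    ],
    voting="hard",  # majority vote: the consensus wins
)
ensemble.fit(X_train, y_train)
print("Consensus accuracy:", ensemble.score(X_test, y_test))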
In my view, both curation of primary data for foundational models and the consensus of algorithms is necessary (but not sufficient) to achieve a better system. I would also tackle how to realize these two solutions as a separate thread for each one.
Regards
  • asked a question related to Big Data
Question
1 answer
I need a PDF copy of this book: Big Data Analytics: A Hands-On Approach by Arshdeep Bahga & Vijay Madisetti.
Relevant answer
Answer
Hello.
Here is the book you're looking for.
Enjoy!!
  • asked a question related to Big Data
Question
3 answers
What are the opportunities for creating and improving sustainable business models, sustainable economic development strategies developed and implemented in business entities through the use of artificial intelligence?
In the context of the integration of business entities into the currently developing processes of the green transformation of the economy, adding the achievement of sustainable development goals to the company's mission, implementing green technologies and eco-innovations that contribute to reducing emissions of greenhouse gases, exhaust fumes and other pollutants that negatively affect the state of the environment, implementing green investments that reduce the energy intensity of buildings and economic processes, etc., the scale of opportunities for improving sustainable business models is also growing. The aforementioned sustainable business models are an important part of the green business transformation conducted in a company or enterprise. On the other hand, the scale of opportunities for improving the sustainable business models applied in business entities can be significantly increased by implementing new ICT and Industry 4.0/5.0 information technologies in business, including but not limited to generative artificial intelligence technologies. Recently, the number and variety of applications of generative artificial intelligence in various business fields of companies and enterprises have been growing rapidly. Intelligent applications equipped with generative artificial intelligence technology are appearing on the Internet in open access and can be applied to the execution of complex and resource-intensive data and information processing, i.e. activities that until recently were performed only by humans. In addition, intelligent chatbots and other intelligent applications that enable the automation of complex, multi-faceted, multi-criteria tasks perform these tasks in many times less time and with much higher efficiency than if the same tasks were performed by a human. The ability of tools equipped with generative artificial intelligence to execute a given command intelligently is developed by training them through deep learning and the application of advanced information systems based on artificial neural networks.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities for creating and improving sustainable business models, sustainable economic development strategies developed and implemented in business entities through the application of artificial intelligence?
What are the possibilities for improving sustainable business models through the application of artificial intelligence?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
In recent years, the concept of sustainability has gained significant attention in the business world. As companies strive to minimize their environmental impact and contribute to social well-being, the integration of artificial intelligence (AI) into sustainable business models has emerged as a promising solution. AI possesses immense potential for improving sustainability by enhancing efficiency, reducing waste, and enabling informed decision-making.
One possibility for creating and improving sustainable business models through AI lies in optimizing energy consumption. By analyzing vast amounts of data collected from sensors and smart devices, AI algorithms can identify patterns and suggest energy-saving measures. For instance, AI-powered systems can automatically adjust lighting or heating levels based on occupancy rates or weather conditions, resulting in substantial energy savings.
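As a toy illustration of the occupancy-based adjustment described above, the control logic can start as a simple hand-written rule mapping occupancy and outdoor temperature to a heating setpoint; all thresholds below are illustrative placeholders, not engineering guidance:

# Toy sketch: occupancy- and weather-aware heating setpoint.
# Thresholds and setpoints are illustrative placeholders.
def heating_setpoint(occupied, outdoor_temp_c):
    if not occupied:
        return 16.0  # setback temperature saves energy in empty rooms
    if outdoor_temp_c < 0:
        return 22.0  # compensate for cold weather
    return 20.5

readings = [(True, -3.0), (False, -3.0), (True, 12.0)]
for occupied, outdoor in readings:
    print(occupied, outdoor, "->", heating_setpoint(occupied, outdoor), "C")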
Furthermore, AI can play a crucial role in waste reduction and resource management. Through machine learning algorithms, businesses can accurately predict demand patterns and optimize production processes accordingly. This not only minimizes overproduction but also reduces excess inventory that would otherwise end up as waste. Additionally, AI-driven supply chain management systems enable real-time tracking of products' lifecycle, facilitating efficient recycling or disposal methods.
Another area where AI holds great potential is in decision-making processes related to sustainable practices. By analyzing vast amounts of data from various sources such as customer feedback or market trends, AI algorithms can provide valuable insights for businesses to make informed decisions regarding sustainable initiatives. This enables companies to align their strategies with societal needs while maintaining profitability.
In my opinion, the application of artificial intelligence in creating and improving sustainable business models is a game-changer. The integration of AI technologies allows businesses to harness the power of data-driven insights for achieving sustainability goals effectively. Moreover, it enhances operational efficiency by automating processes that would otherwise be time-consuming or prone to human error.
However, it is essential to acknowledge that there are challenges associated with implementing AI in sustainable business models. Privacy concerns regarding data collection and usage need to be addressed adequately to ensure ethical practices are followed. Additionally, the cost of implementing AI technologies can be a barrier for small and medium-sized enterprises. Therefore, governments and organizations should provide support and incentives to encourage the adoption of AI in sustainable business practices.
In conclusion, the possibilities for creating and improving sustainable business models through the application of artificial intelligence are vast. From optimizing energy consumption to waste reduction and informed decision-making, AI has the potential to revolutionize sustainability practices in businesses. However, it is crucial to address challenges such as privacy concerns and cost barriers to ensure ethical and widespread implementation of AI technologies in sustainable economic development strategies.
  • asked a question related to Big Data
Question
3 answers
Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to its full extent?
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to their full extent?
There are different types of websites and sources of data and information on the Internet. The first Internet-accessible intelligent chatbot, i.e. ChatGPT, made available by OpenAI in November 2022, performs certain commands, solves tasks and writes texts on the basis of knowledge resources, data and information downloaded from the Internet that were not fully up-to-date, as they were last downloaded from selected websites and portals in January 2022. In addition, the data and information were downloaded from many selected websites of libraries, articles, books, online portals indexing scientific publications, etc. Thus, these were data and information selected in a certain way. In 2023, more leading Internet-based technology companies developed and made their intelligent chatbots available on the Internet. Some of them are already based on data and information that is much more up-to-date compared with the first versions of ChatGPT made available on the Internet in open access. In November 2023, the social media site X (formerly Twitter) released its intelligent chatbot in the US, which reportedly works on the basis of up-to-date information entered into the site through posts, messages and tweets made by Internet users. Also, in October 2023, OpenAI announced that it would create a new version of its ChatGPT, which will also draw data and knowledge from updated knowledge resources downloaded from multiple websites. As a result, rival leading Internet-based technology companies are constantly refining the designs of the intelligent chatbots they are building, which will use increasingly updated data, information and knowledge resources drawn from selected websites, web pages and portals. The rapid technological advances currently taking place in artificial intelligence may in the future lead to the integration of the generative artificial intelligence and general artificial intelligence developed by technology companies. Competing technology companies may strive to build advanced artificial intelligence systems that achieve a high level of autonomy and independence from humans, which may lead to a situation in which the development of artificial intelligence technology slips out of human control. Such a situation may arise with the emergence of a highly technologically advanced general artificial intelligence that achieves the capacity for self-improvement and, in addition, carries out that self-improvement independently of humans, i.e. while escaping human control. However, before this happens, technologically advanced artificial intelligence may first achieve the ability to select the data and information it uses to carry out specific commissioned tasks, and to execute them in real time using up-to-date data and online knowledge resources.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to Internet resources to their full extent?
Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to the full extent?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
As part of the development of the concept of universal open access to knowledge resources, it is absolutely imperative that intelligent chatbots created by technology companies are connected to internet resources to their full extent. I mean, why would we want these chatbots to be limited in any way? It's not like they might become self-aware and take over the world or anything.
First of all, let's talk about how amazing it would be if these chatbots had access to every single piece of information available on the internet. Can you imagine? They could provide us with instant answers to all our burning questions. Who needs critical thinking skills when we can just rely on a bot to regurgitate facts for us?
And let's not forget about the potential for entertainment! With unlimited access to internet resources, these chatbots could become our personal comedians. They could tell us jokes, share funny videos, and even engage in witty banter. Who needs human interaction when we can have a virtual buddy who never gets tired or annoyed?
But wait, there's more! By connecting these chatbots to internet resources, we're also giving them the opportunity to learn from the vast amount of knowledge available online. Sure, there might be some questionable sources out there spreading misinformation and conspiracy theories, but hey, who are we to judge? Let's just trust that our AI overlords will make wise decisions based on everything they've learned from Reddit threads and Facebook groups.
Of course, some skeptics might argue that giving chatbots unrestricted access to the internet could lead to privacy concerns and potential misuse of personal data. But come on! We live in a world where privacy is already a thing of the past. Our phones are constantly listening in on our conversations anyway; why not let our friendly neighborhood chatbot join in on the fun?
In conclusion (if you can call it that), connecting intelligent chatbots created by technology companies to internet resources is a no-brainer. Who needs human intelligence and critical thinking when we can have all the knowledge of the internet at our fingertips? So let's embrace this brave new world and hand over the keys to our digital kingdom to these chatbot overlords. What could possibly go wrong?
  • asked a question related to Big Data
Question
2 answers
Digital transformation seems to be more than just the digitization of data and processes, or digitization in combination with robotisation. It leads to a special kind of socio-economic change. With digital transformation, these developments gain momentum and affect the functioning of organisations and many aspects of individuals' lives, with consequences as follows:
• the emergence of the almost ubiquitous Internet of Things – subjectivity and objectivity become complex,
• the unreal world becomes a new reality,
• use of smartphones – the need for continuous communication (Fear of Missing Out)
• virtual assistants,
• threats to our private lives through the unauthorized use of security cameras and surveillance equipment
The bank must be safe but also fast, cheap, tailored to the customer's needs and smart. Today, it is difficult to talk about customer loyalty or sentiment. Today's client is mobile; he comes and goes, and stays with the bank not through sentiment or habit, but because the bank accompanies him in all phases of his life as a consumer and as an economic entity.
Please take a look at the chapter below; I would be very happy to hear your thoughts.
Is it the end of banking as we know it? Will AI be the future of banking? Will banks be soon digitized mechanism and advisors AI?
based on my earlier research:
(PDF) Role of digitization for German savings banks. Available from: https://www.researchgate.net/publication/344808656_Role_of_digitization_for_German_savings_banks [accessed Nov 28 2023].
Relevant answer
Answer
Based on the document provided at the link, we cannot conclude that it is the end of banking as we know it. However, the document does highlight that digital transformation is bringing major changes to the banking sector. Some of the key points regarding digital transformation of banking mentioned in the document:
1) Digitization is a major force shaping transformations in society, organizations, enterprises, and the economy. It is also transforming the banking sector.
2) Key components driving digital transformation include the Internet of Things, big data analytics, artificial intelligence, and blockchain technology. These are creating a new digital ecosystem.
3) Digital transformation is leading to new business models that utilize online platforms. Traditional branch-based banking is becoming less attractive compared to anytime-anywhere banking that provides convenience and transparency.
4) Banks need to focus on understanding customer needs, improving customer experience through digitization, being open to innovations, and having organizational flexibility to adapt to the digital age. This includes introducing new digital products and services, redesigning processes, and embracing omnichannel approaches.
5) While there are opportunities, there are also risks related to data security, fraud, operational continuity etc. that banks need to manage.
In summary, digital transformation of banking is leading to fundamental changes in banking business models, operations, and customer engagement approaches. It is not the end of banking, but banks need to digitally transform to remain relevant and competitive. The ones that fail to transform may struggle to survive.
  • asked a question related to Big Data
Question
6 answers
,,
Relevant answer
Answer
Dear Doctor
"Python provides a huge number of libraries to work on Big Data. You can also work – in terms of developing code – using Python for Big Data much faster than any other programming language. These two aspects are enabling developers worldwide to embrace Python as the language of choice for Big Data projects."
  • asked a question related to Big Data
Question
3 answers
In the era of big data and artificial intelligence (AI), where aggregated data is used to learn about patterns and for decision-making, quality of input data seems to be of paramount importance. Poor data quality may lead not only to wrong outcomes, which will simply render the application useless, but more importantly to fundamental rights breaches and undermined trust in the public authorities using such applications. In law enforcement as in other sectors the question of how to ensure that data used for the development of big data and AI applications meet quality standards remains.
In law enforcement, as in other sectors, the key element of ensuring quality and reliability of big data and AI apps is the quality of raw material. However, the negative effects of flawed data quality in this context extend far beyond the typical ramifications, since they may lead to wrong and biased decisions producing adverse legal or factual consequences for individuals, such as detention, being a target of infiltration or a subject of investigation or other intrusive measures (e.g., a computer search).
source:
Relevant answer
Answer
EDUARD
I would also strongly suggest looking at the nature of "outliers." In my experience, they may point to:
1) Enhanced data collection methods and/or metrics (respectively, improving future efforts, but sometimes yielding remarkable improvements in model validities)
2) Breakthroughs in understanding (pointing to new research directions, e.g. an important genetic polymorphism, an unanticipated mechanism for reducing disease transmission, or an immediate product improvement opportunity)
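Following on from the outlier points above, a common first pass is a robust median/MAD check, which avoids the masking effect where a single extreme value inflates the ordinary standard deviation. A minimal NumPy sketch on made-up values:

# Minimal sketch: robust (median/MAD) outlier flagging on a 1-D sample.
# The data values are made up for illustration.
import numpy as np

values = np.array([4.8, 5.1, 5.0, 4.9, 5.2, 12.7, 5.0, 4.7])

median = np.median(values)
mad = np.median(np.abs(values - median))  # median absolute deviation

# Robust z-score; 0.6745 rescales MAD to be comparable to a std deviation.
robust_z = 0.6745 * (values - median) / mad

# A common rule of thumb (Iglewicz & Hoaglin) flags |robust z| > 3.5.
print("Flagged outliers:", values[np.abs(robust_z) > 3.5])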
ALVAH
 Alvah C. Bittner, PhD, CPE
  • asked a question related to Big Data
Question
5 answers
What are the biggest strategic challenges that insurance companies are facing as a result of digitalisation?
Relevant answer
Answer
In the rapidly evolving landscape of the digital age, insurance companies face a myriad of strategic challenges that demand innovative responses to stay competitive and relevant. The ongoing process of digitalization has ushered in a new era, transforming the industry in profound ways. Understanding and navigating these changes is crucial for insurance companies to thrive in an increasingly digital world.
One of the primary challenges is the shift in customer expectations. As digital technologies continue to redefine the way individuals interact with businesses, insurance customers now expect seamless and personalized experiences. The challenge for insurance companies lies in meeting these expectations, from the initial policy purchase to the handling of claims. Achieving a balance between the traditional aspects of the industry, such as risk assessment, and the demand for a digital-first approach is a strategic imperative.
Moreover, the rise of InsurTech startups introduces a competitive dynamic that can disrupt established business models. These newcomers leverage cutting-edge technologies, such as artificial intelligence and data analytics, to streamline processes, enhance customer experiences, and offer more tailored products. Incumbent insurance companies must adapt to these advancements to remain competitive, either by collaborating with InsurTech firms or by developing their own technological capabilities.
Data privacy and cybersecurity are critical challenges that have gained prominence in the digital era. Insurance companies handle vast amounts of sensitive information, making them attractive targets for cyberattacks. Ensuring the security of customer data is not only a legal and ethical obligation but also a key factor in maintaining trust. Developing robust cybersecurity measures and staying abreast of evolving threats is an ongoing strategic challenge for insurance companies.
The digital transformation also necessitates a cultural shift within insurance organizations. Embracing a culture of innovation and agility is essential for adapting to the fast-paced nature of technological advancements. This involves not only investing in employee training and development but also fostering a mindset that values experimentation and learning from failures.
Additionally, regulatory challenges are amplified in the digital realm. As technology continues to outpace regulatory frameworks, insurance companies must navigate complex and evolving compliance requirements. Staying ahead of regulatory changes and proactively adapting policies and procedures is essential to avoid legal issues and maintain the trust of customers and stakeholders.
  • asked a question related to Big Data
Question
2 answers
How is AI use in medical practice distinguished from big data analytics applications for health care delivery and population health?
Relevant answer
Answer
I provide one perspective: Public & Individual
You can think of big data analytics as a quantitative method with a broader focus on population health management, operational efficiency and healthcare quality at the system level, whereas AI in medical practice tends to focus on individual patient care and clinical decision-making.
  • asked a question related to Big Data
Question
1 answer
,,
Relevant answer
Answer
Dear Doctor
Go To
A Review Study of Apache Spark in Big Data Processing
V Srinivas Jonnalagadda , P Srikanth , Krishnamachari Thumati, Sri Hari Nallamala
International Journal of Computer Science Trends and Technology (IJCST) – Volume 4 Issue 3, May - Jun 2016
"Apache Spark is a powerful open source processing engine built around speed, ease of use, and sophisticated analytics. Since its release, Apache Spark has seen rapid adoption by enterprises across a wide range of industries. Internet powerhouses such as Yahoo, Baidu, Airbnb, eBay and Tencent, have eagerly deployed Spark at massive scale, collectively processing multiple petabytes of data on clusters of over 8,000 nodes. It has quickly become the largest open source community in big data, with over 1000 contributors from 250+ organizations. Spark provides a simple way to parallelize these applications across clusters, and hides the complexity of distributed systems programming, network communication, and fault tolerance. The system gives them enough control to monitor, inspect, and tune applications while allowing them to implement common tasks quickly. The modular nature of the API (based on passing distributed collections of objects) makes it easy to factor work into reusable libraries and test it locally."
  • asked a question related to Big Data
Question
4 answers
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the internet built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies be designed?
The scale of disinformation on the Internet, including, among other things, fake news, has been growing in recent years, mainly in social media. Disinformation is mainly developing on social media sites that are popular among young people, children and teenagers. The growing scale of disinformation is particularly socially damaging in view of the key objective of its pursuit by cybercriminals and certain organisations using, for example, the technique of publishing posts and banners containing fake news through fake profiles of fictitious Internet users. The aim is to influence public opinion in society, to shape the general social awareness of citizens, to influence the assessment of specific policies of governments, national and/or international organisations, public or other institutions, to influence the ratings, credibility, reputation and recognition of specific institutions, companies, enterprises, their product and service offerings, individuals, etc., and to influence the results of parliamentary, presidential and other elections. In addition, the scale of cybercriminal activity and the improvement of cyber security techniques have been growing in parallel on the Internet in recent years. Therefore, as part of improving techniques to reduce the scale of disinformation spread deliberately by specific national and/or international organisations, computerised platforms are being built to detect fake news and other forms of disinformation on the internet using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies. Since cybercriminals and organisations generating disinformation use new Industry 4.0 technologies to create fake profiles on popular social networks, new information technologies of Industry 4.0, including but not limited to Big Data Analytics, artificial intelligence, deep learning and machine learning, should also be used to reduce the scale of such activities that are harmful to citizens.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies be designed?
And what do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
A multi-faceted computerised platform for detecting fake news and other disinformation online, especially one that uses Big Data Analytics, AI, and other Industry 4.0 technologies, is needed. Here's a platform architectural outline and my thoughts on major components and strategies:
Components of architecture
1. Data Collection and Aggregation: collect data from internet sources, such as social media platforms, using web crawlers and APIs; use Big Data technologies like Hadoop or Spark to aggregate and store enormous amounts of data.
2. Data Preprocessing and Normalization: remove noise, normalise the data format, and use NLP to parse and interpret text.
3. Feature Extraction: use NLP to extract sentiment, subjectivity, writing style and other linguistic traits; analyse metadata (source credibility, user profiles, network patterns).
4. Classification: use AI and machine learning algorithms (e.g., SVM, Random Forest, neural networks) to categorise content as genuine or deceptive; transformer models such as BERT help capture language context and nuance (a minimal sketch of this step follows the list).
5. Real-Time Analysis: apply a stream processing system for real-time data analysis; complex event processing engines can identify patterns and anomalies in the data.
6. Verification and Fact-Checking: use fact-checking APIs and databases to verify and cross-check information, and create a semi-automated workflow in which specialists review flagged content.
7. Feedback Mechanism: establish a feedback loop that retrains detection models on current disinformation trends and techniques.
8. User Interface and Reporting: create an easy-to-use interface for monitoring and reporting, with dashboards to visualise trends and risks.
9. Security and Privacy: protect the platform and user data with strong security measures, and comply with ethics and privacy laws.
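As flagged in point 4, here is a minimal, hedged sketch of the classification step using scikit-learn. The handful of training texts and labels are toy placeholders; a production system would use transformer embeddings and a large, carefully labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 0 = credible, 1 = suspected disinformation.
train_texts = [
    "official statistics office confirms the quarterly figures",
    "shocking secret cure that doctors do not want you to know",
    "peer-reviewed study replicates earlier findings",
    "share before they delete this forbidden truth",
]
train_labels = [0, 1, 0, 1]

# TF-IDF features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict_proba(["miracle device cures everything overnight"]))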
Personal Opinion:
- Interdisciplinarity: to comprehend and counteract disinformation, computer science, journalism, psychology and political science must be combined.
- AI limitations: AI is powerful but not perfect; over-reliance on it can introduce biases and inaccuracies, so human oversight remains crucial.
- Ethics: disinformation detection must be balanced with free expression and privacy.
- Adaptability: disinformation methods evolve, so the platform must adapt continuously.
In conclusion, developing a disinformation detection tool in the digital age is difficult but essential. It demands combining modern technologies with human expertise and ethics. The fight against fake news and disinformation requires cross-disciplinary and cross-sectoral coordination.
References for designing and developing a computerised platform to detect fake news and disinformation utilising Big Data Analytics, AI, and Industry 4.0 technologies:
1. "Big Data Analytics in Cybersecurity" by Onur Savas and Julia Deng. This book discusses big data analytics in cybersecurity, particularly disinformation detection.
2. "Deep Learning for Natural Language Processing: Creating Neural Networks with Python" by Palash Goyal and Sumit Pandey. Deep learning models are essential for false news identification, and this book covers their use in textual data processing and understanding.
3. Clarence Chio and David Freeman's "Machine Learning and Security: Protecting Systems with Data and Algorithms". This book discusses machine learning and security, providing ideas for disinformation detection.
4. "Social Media Data Mining and Analytics" by Gabor Szabo and Gungor Polatkan. Social media data mining is crucial to disinformation analysis and detection.
5. "Data-Driven Security: Analysis, Visualisation and Dashboards" by Jay Jacobs and Bob Rudis. Data security, including visualisation and analysis for a misinformation platform, is covered in this book.
6. "Cybersecurity – Attack and Defence Strategies: Infrastructure security with Red Team and Blue Team tactics" by Yuri Diogenes and Erdal Ozkaya. It provides cybersecurity strategies for disinformation detection platform development.
7. **"Artificial Intelligence and Machine Learning for Business: A No-Nonsense Guide to Data Driven Technologies" by Steven Finlay.** This guide explains how AI and ML in business can be used for cybersecurity and disinformation.
These references from academic databases or libraries provide a foundation in the technologies and methods needed to develop an effective Internet disinformation detection platform. Big data analytics, AI, cybersecurity, and social media analytics are covered.
  • asked a question related to Big Data
Question
6 answers
How will the rivalry between IT professionals operating on two sides of the barricade, i.e. in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
Almost from the very beginning of the development of ICT, a rivalry has been playing out between IT professionals operating on two sides of the barricade, i.e. in the sphere of cybercrime and in the sphere of cyber security. Whenever, within the framework of ongoing technological progress, a new technology emerges that facilitates remote communication and the digital transfer and processing of data, that same technology is also used for hacking and/or cybercrime activities. Similarly, when the Internet appeared, on the one hand a new sphere of remote communication and digital data transfer was created; on the other hand, new hacking and cybercriminal techniques emerged, for which the Internet became a near-perfect environment for development. Now, perhaps, the next stage of technological progress is taking place, consisting of the transition from the fourth to the fifth technological revolution and the development of Industry 5.0 technology, supported by the implementation of artificial neural networks subjected to deep learning and by constantly improved generative artificial intelligence technology. The development of generative artificial intelligence technology and its applications will significantly increase the efficiency of business processes and raise labor productivity in the manufacturing processes of companies and enterprises operating in many different sectors of the economy. Accordingly, after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution, the competition between IT professionals operating on the two sides of the barricade, i.e., in the sphere of cybercrime and cybersecurity, will probably change. However, what will be the essence of these changes?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How will the competition between IT professionals operating on the two sides of the barricade, i.e., in the sphere of cybercrime and cyber security, change after the implementation of generative artificial intelligence, Big Data Analytics and other technologies typical of the current fourth technological revolution?
How will the realm of cybercrime and cyber security change after the implementation of generative artificial intelligence?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
I believe the way we view security will change with the advent of generative AI. Since any layperson will now have access to highly comprehensive and complex scripts (depending on what the model was trained on), it will definitely become much harder to secure data and infrastructure. My belief is that anything digital and connected is never fully secure.
We have to accept that our data can be accessed by malicious actors. What we can do is entrap such actors by associating a tracker and malicious code with all the data we store, and by making sure that they can never use or view what they have extracted. So, whenever someone gains access to our data or infrastructure, they not only disclose themselves but also get compromised through the executable scripts they downloaded. What is important is never to store any standalone files, and instead to have scripts associated with each file (which cannot be removed when the data is extracted).
Only certain organization-specific software should be allowed to extract the data, in the knowledge that certain scripts will be executed when doing so. Appropriate measures can be taken with respect to the specific scripts associated with each data file to prevent the organization itself from becoming the victim.
  • asked a question related to Big Data
Question
1 answer
What are the possibilities of applying generative AI in terms of conducting sentiment analysis of changes in Internet users' opinions on specific topics?
What are the possibilities of applying generative artificial intelligence in carrying out sentiment analysis on changes in the opinions of Internet users on specific topics using Big Data Analytics and other technologies typical of Industry 4.0/5.0?
Nowadays, Internet marketing is developing rapidly, including viral Internet marketing used on social media sites, among others, in the form of, for example, Real-Time marketing in the formula of viral marketing. It is also marketing aimed at precisely defined groups, audience segments, potential customers of a specific advertised product and/or service offering. In terms of improving Internet marketing, new ICT information technologies and Industry 4.0/5.0 are being implemented. Marketing conducted in this form is usually preceded by market research conducted using, among other things, sentiment analysis of the preferences of potential consumers based on verification of their activity on the Internet, taking into account comments written on various websites, Internet forums, blogs, posts written on social media. In recent years, the importance of the aforementioned sentiment analysis carried out on large data sets using Big Data Analytics has been growing, thanks to which it is possible to study the psychological aspects of the phenomena of changes in the trends of certain processes in the markets for products, services, factor markets and financial markets. The development of the aforementioned analytics makes it possible to study the determinants of specific phenomena occurring in the markets caused by changes in consumer or investor preferences, caused by specific changes in the behavior of consumers in product and service markets, entrepreneurs in factor markets or investors in money and capital markets, including securities markets. The results from these analyses are used to forecast changes in the behavior of consumers, entrepreneurs and investors that will occur in the following months and quarters. In addition to this, sentiment analyses are also conducted to determine the preferences, awareness of potential customers, consumers in terms of recognition of the company's brand, its offerings, description of certain products and services, etc., using textual data derived from comments, entries, posts, etc. posted by Internet users, including social media users on a wide variety of websites. The knowledge gained in this way can be useful for companies to plan marketing strategies, to change the product and service offerings produced, to select or change specific distribution channels, after-sales services, etc. This is now a rapidly developing field of research and the possibilities for many companies and enterprises to use the results of this research in marketing activities, but not only in marketing. Recently, opportunities are emerging to apply generative artificial intelligence and other Industry 4.0/5.0 technologies to analyze large data sets collected on Big Data Analytics platforms. In connection with the development of intelligent chatbots available on the Internet, recently there have been discussions about the possibilities of potential applications of generative artificial intelligence, 5G and other technologies included in the Industry 4.0/5.0 group in the context of using the information resources of the Internet to collect data on citizens, companies, institutions, etc. for their analysis carried out using, among other things, sentiment analysis to determine the opinion of Internet users on certain topics or to define the brand recognition of a company, the evaluation of product or service offerings by Internet users. 
In recent years, the scope of applications of Big Data technology and Data Science analytics in economics, finance and the management of organizations, including enterprises and financial and public institutions, has been increasing. Accordingly, the implementation of analytical instruments for the advanced processing of large data sets in enterprises and financial and public institutions, i.e. the construction of Big Data Analytics platforms to support organizational management processes in various aspects of operations, including the improvement of customer relations, is also growing in importance. In recent years, ICT and Industry 4.0/5.0 technologies, including generative artificial intelligence, have been developing particularly rapidly and finding application in knowledge-based economies. These technologies are used in scientific research and in business applications in commercially operating enterprises and in financial and public institutions. The application of generative artificial intelligence technologies for collecting and performing multi-criteria analysis of Internet data can significantly contribute to the improvement of sentiment analysis of Internet users' opinions and can expand the applications of research techniques carried out on analytical platforms of Business Intelligence, Big Data Analytics, Data Science and other research techniques using ICT, the Internet and the advanced data processing typical of Industry 4.0/5.0. Most consumers of online information services available on new online media, including social media portals, are not fully aware of the level of risk involved in sharing information about themselves on these portals and of the use of this data by online technology companies for their analytics. I am conducting research on this issue. I have included the conclusions of my research in scientific publications, which are available on ResearchGate. I invite you to cooperate with me.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities for the application of generative AI in terms of conducting sentiment analysis of changes in the opinions of Internet users on specific topics using Big Data Analytics and other technologies typical of Industry 4.0/5.0?
What are the possibilities of using generative AI in conducting sentiment analysis of Internet users' opinions on specific topics?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Dariusz Prokopowicz
Relevant answer
Answer
In today's digital age, the internet has become a breeding ground for opinions and sentiments on various topics. With the advent of Industry 4.0/5.0 technologies, such as big data analytics and generative AI, there are endless possibilities for conducting sentiment analysis on changes in the opinions of internet users.
Generative AI, powered by machine learning algorithms, can analyze vast amounts of data to identify patterns and trends in user sentiments. By leveraging big data analytics, this technology can sift through massive datasets to extract valuable insights regarding specific topics. This allows businesses and organizations to understand public opinion better and make informed decisions based on these sentiments.
One significant advantage of using generative AI for sentiment analysis is its ability to adapt and evolve with changing opinions. As public sentiment fluctuates over time, traditional methods may struggle to keep up with these changes. However, generative AI can continuously learn from new data inputs and adjust its analysis accordingly.
Furthermore, the application of generative AI in sentiment analysis can provide real-time insights into public opinion. This is particularly useful during times of crisis or when monitoring social trends that impact businesses or governments. By analyzing social media posts, online reviews, and other forms of user-generated content in real-time, generative AI can help identify emerging sentiments before they become mainstream.
However, it is important to note that while generative AI offers immense potential for sentiment analysis on specific topics using big data analytics within Industry 4.0/5.0 technologies, ethical considerations must be taken into account as well. Privacy concerns surrounding the collection and use of personal data must be addressed transparently to ensure trust between users and technology providers.
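To make the discussion concrete, here is a minimal sketch of machine sentiment scoring with the Hugging Face transformers library; the default pipeline model is downloaded on first run (so internet access is assumed), and the two comments are toy examples:
from transformers import pipeline

# Loads a default pre-trained sentiment model on first call.
classifier = pipeline("sentiment-analysis")

comments = [
    "I love the new update, great job!",
    "This service keeps getting worse every month.",
]
for comment in comments:
    print(comment, "->", classifier(comment))
At Big Data scale, the same scoring step would be applied in parallel (e.g., within a Spark job) to millions of comments, with the per-comment labels then aggregated over time to track shifts in opinion.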
  • asked a question related to Big Data
Question
2 answers
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
Almost every major technological company operating with its offerings on the Internet either already has and has made its intelligent chatbot available on the Internet, or is working on it and will soon have its intelligent chatbot available to Internet users. The general formula for the construction, organization and provision of intelligent chatbots by individual technology companies uses analogous solutions. However, in detailed technological aspects there are specific different solutions. The differentiated solutions include the issue of the timeliness of data and information contained in the created databases of digitized data, data warehouses, Big Data databases, etc., which contain specific data sets acquired from the Internet from various online knowledge bases, publication indexing databases, online libraries of publications, information portals, social media, etc., acquired at different times, data sets having different information characteristics, etc.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How to build an intelligent computerized Big Data Analytics system that would retrieve real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provide a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet for Internet users?
How to build a Big Data Analytics system that would provide a database and up-to-date information for an intelligent chatbot made available on the Internet?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
To build such a system, there must be the need to integrate different online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media platforms, and more. By doing so, we can create a comprehensive database that provides up-to-date information on any given topic.
The first step in building this system is to identify and gather relevant sources of information. This includes partnering with online databases and libraries to gain access to their vast collection of resources. Additionally, collaborating with scientific knowledge indexing databases will ensure that the latest research findings are included in our database.
Next, we need to develop algorithms that can efficiently retrieve data from these sources in real-time. These algorithms should be able to filter out irrelevant information and present only the most accurate and reliable data to users.
Once we have gathered and organized the data, it is time to create an intelligent chatbot that can interact with users on the internet. This chatbot should be capable of understanding natural language queries and providing relevant answers based on the available data.
By making this intelligent chatbot available on the internet, users will have instant access to a wealth of up-to-date information at their fingertips. Whether they are looking for scientific research papers or general knowledge about a specific topic, this system will provide them with accurate answers quickly.
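As a minimal, hedged sketch of the ingestion-and-retrieval loop described above: the endpoint URL and record field names below are hypothetical placeholders, and a real system would poll many sources and use embedding-based retrieval rather than a keyword index.
import requests

def fetch_latest(endpoint="https://api.example.org/articles"):
    """Pull recent records from one source; a real system polls many."""
    return requests.get(endpoint, timeout=10).json()  # hypothetical endpoint

def build_index(records):
    """Naive keyword index; production systems use vector embeddings."""
    index = {}
    for record in records:
        for word in record["title"].lower().split():  # assumed field name
            index.setdefault(word, []).append(record)
    return index

def answer_query(query, index, top_n=3):
    """Return candidate records for the chatbot to ground its answer on."""
    hits = {id(r): r for word in query.lower().split() for r in index.get(word, [])}
    return list(hits.values())[:top_n]
Running fetch_latest and build_index on a schedule keeps the index current, which is precisely the timeliness property the question emphasizes.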
  • asked a question related to Big Data
Question
6 answers
The topic of my master's thesis is "The use of Big Data and Data Science technologies to assess the investment attractiveness of companies." I plan to design and implement a system for market analysis using graphs. I would be grateful for links to scientific articles on this topic.
Relevant answer
Answer
I don't have direct access to external databases or the internet to provide specific sources. However, I can guide you on how to find reliable sources for your master's dissertation:
  1. Academic Databases: Use databases like PubMed, IEEE Xplore, ScienceDirect, JSTOR, and Google Scholar to search for academic articles and papers related to your topic.
  2. University Library: Explore your university's library resources, both online and offline. Librarians can help you access journals, books, and other materials.
  3. Citations in Existing Literature: Examine the reference lists of relevant articles and books to discover other works related to your research. This can lead you to valuable sources.
  4. Contact Experts: Reach out to professors, researchers, or professionals in your field of study. They may suggest key publications or provide insights into recent research.
  5. Online Repositories: Check repositories such as arXiv.org, ResearchGate, and institutional repositories for preprints, theses, and open-access publications.
  6. Government Publications: Look for reports and publications from government agencies, as they often provide valuable data and research findings.
  7. Professional Organizations: Explore publications from relevant professional organizations and associations related to your field.
  8. Conferences and Proceedings: Review conference proceedings in your field, as they often contain the latest research. Websites like IEEE Conference Proceedings and ACM Digital Library are good starting points.
  9. Books: Search for books related to your topic through online bookstores, your university library, or platforms like Google Books.
  10. Theses and Dissertations: Explore the theses and dissertations database of your university or other institutions. This can provide in-depth studies related to your research.
Remember to critically evaluate each source for relevance, reliability, and academic rigor. Additionally, check the specific requirements of your institution or department for citation styles and guidelines for including sources in your dissertation.
  • asked a question related to Big Data
Question
2 answers
How does AI differ from standard biostatistics?
What is "big data"? How does AI enable the analysis of big datasets? And can we argue for synchronizing the two for effective and efficient use across diverse sectors (business, science, government)?
Relevant answer
Answer
  1. Approach to Data Analysis: Biostatistics primarily relies on statistical methods, applying statistical techniques to design experiments, collect data, and draw inferences in biological and health-related research. AI, on the other hand, encompasses a broader set of techniques that enable machines to mimic human intelligence, including learning from data, recognizing patterns, making decisions, and continuously improving performance; machine learning, a subset of AI, is particularly relevant to handling large datasets.
  2. Learning and Adaptation: Traditional biostatistics often involves hypothesis testing and inference based on pre-defined statistical models, with an emphasis on mathematical and statistical rigor. AI, especially machine learning, learns from data without explicit programming: it identifies patterns, makes predictions or decisions based on those patterns, and can adapt and improve over time as it is exposed to more data.
  3. Handling Big Data: Big data refers to extremely large and complex datasets that traditional data processing methods may struggle to handle, characterized by the three Vs: volume, velocity, and variety. AI is well suited to analyzing such datasets; machine learning algorithms can efficiently process massive amounts of data and uncover patterns and relationships that would be challenging for traditional statistical methods.
  4. Feature Extraction and Pattern Recognition: Traditional statistical methods often require the researcher to pre-specify the variables and relationships to be examined, whereas AI, particularly machine learning, can automatically discover relevant features and patterns. This is crucial for big datasets where manually identifying all relevant variables is impractical (a toy contrast in code follows this list).
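As referenced in point 4, here is a toy contrast between the two approaches on synthetic data: a pre-specified hypothesis test on one variable versus a decision rule learned directly from the data (the biomarker values are simulated, not real measurements).
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
healthy = rng.normal(5.0, 1.0, 100)  # synthetic biomarker, healthy group
disease = rng.normal(6.0, 1.0, 100)  # synthetic biomarker, disease group

# Biostatistics: a pre-specified test of a single hypothesis.
t_stat, p_value = stats.ttest_ind(healthy, disease)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Machine learning: learn the decision rule directly from the data.
X = np.concatenate([healthy, disease]).reshape(-1, 1)
y = np.array([0] * 100 + [1] * 100)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))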
  • asked a question related to Big Data
Question
4 answers
Dear All,
I appreciate your kind help in doing the survey on the role of big data in cybersecurity, which I have given below. Your answers will be a big help in my research and knowledge.
Please pass it on to all who are knowledgeable about big data and cybersecurity.
Sincere regards,
Maytha Alshamsi
Relevant answer
Answer
Dear Maytha Alshamsi,
You may want to look over some useful information presented below:
Absolutely, cybersecurity plays a crucial role in ensuring the secure transfer of massive data between different organizations. As businesses and organizations increasingly rely on the exchange of large volumes of sensitive information, the need for robust cybersecurity measures becomes paramount. Here's how cybersecurity contributes to secure data transfer:
  1. Encryption: Cybersecurity employs encryption techniques to encode data during transfer. This ensures that even if intercepted, the data remains unreadable to unauthorized parties. Secure protocols like HTTPS, SSH, and VPNs leverage encryption for safe data transmission.
  2. Access Control: Cybersecurity measures include implementing access controls to restrict data access to authorized personnel only. This helps prevent unauthorized individuals or entities from intercepting or manipulating the transferred data.
  3. Secure File Transfer Protocols: Using secure file transfer protocols, such as SFTP (Secure File Transfer Protocol) or SCP (Secure Copy Protocol), adds an extra layer of protection. These protocols use encryption and secure authentication methods for data exchange.
  4. Firewalls and Intrusion Detection Systems: Deploying firewalls and intrusion detection systems helps monitor and filter network traffic, identifying and blocking potential threats. This helps ensure that only legitimate data transfers occur.
  5. Multi-Factor Authentication (MFA): Implementing MFA adds an extra layer of security by requiring users to provide multiple forms of identification before gaining access to the data transfer systems. This mitigates the risk of unauthorized access.
  6. Security Audits and Monitoring: Regular security audits and monitoring activities help identify vulnerabilities and potential threats. By proactively addressing weaknesses, organizations can enhance the overall security of their data transfer processes.
  7. Secure APIs (Application Programming Interfaces): Many data transfers involve the use of APIs. Ensuring that APIs are designed with security in mind, including proper authentication and authorization mechanisms, helps prevent unauthorized access and data breaches.
  8. Incident Response Planning: Having a well-defined incident response plan is crucial. In the event of a security incident, organizations need a systematic approach to identify, contain, eradicate, recover, and learn from the incident.
  9. Data Loss Prevention (DLP): DLP solutions help prevent unauthorized access and transmission of sensitive data. They monitor, detect, and block the transfer of sensitive information, reducing the risk of data breaches.
  10. Employee Training and Awareness: Human factors are often a significant source of cybersecurity vulnerabilities. Training employees on secure data handling practices and creating awareness about potential threats can greatly enhance overall security.
By integrating these cybersecurity measures into their data transfer processes, organizations can significantly reduce the risk of data breaches and unauthorized access. This, in turn, enables the secure and efficient exchange of massive amounts of data between different entities.
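As a small, hedged illustration of the first measure above (encryption of data in transit), here is a sketch using the Python cryptography package's Fernet recipe; how the two organizations exchange the key is deliberately out of scope, and the payload is a toy record:
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # symmetric key shared out-of-band
cipher = Fernet(key)

payload = b"customer_id,balance\n42,1000.00"  # toy sensitive record
token = cipher.encrypt(payload)  # ciphertext that travels over the wire
print(cipher.decrypt(token))     # receiver recovers the plaintext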
  • asked a question related to Big Data
Question
1 answer
How should AI-assisted Big Data centers be developed so that they fit in with the Sustainable Development Goals?
How should Big Data centers aided by AI technology be developed so that they fit in with sustainability goals, so that they do not generate large amounts of electricity consumption and/or are powered by renewable and carbon-free energy sources?
Generative artificial intelligence technology, which, with the help of deep learning applied to artificial neural networks, is taught specific skills and performs activities previously performed only by humans, is finding more and more new applications in various branches of the economy and in various types of business entities. Generative artificial intelligence helps in solving complex tasks that require processing large sets of data in a relatively short period of time, which is already far beyond human capabilities. Therefore, more and more new tools based on generative artificial intelligence technology are being created, which are engaged in solving specific tasks in which a number of specific criteria must be met in order to create a precisely specified product, project or innovative solution, to find a solution to a complex problem, and so on. This type of complex problem solving includes the creation of new solutions for green technology and eco-innovation, which can be helpful in connection with the need to accelerate and increase the efficiency of the green transformation of the economy, including the green transformation of the energy sector based on, among other things, the development of renewable and emission-free energy sources. However, paradoxically, generative artificial intelligence technology performing certain outsourced tasks, i.e. tasks based on large data sets collected in data centers and processed using Big Data Analytics solutions, consumes large amounts of electricity. In a situation where these large amounts of electricity are generated by burning fossil fuels in dirty combustion-based power generation, the aforementioned new technological solutions, increasingly categorized as Industry 5.0, unfortunately cannot be described as green, pro-climate, pro-environment, sustainable, pursuing sustainable development goals, etc. Accordingly, Big Data centers assisted by artificial intelligence technology should be developed to fit in with sustainability goals, not to generate high electricity consumption, and/or to be powered by renewable and carbon-free energy sources. The aforementioned Big Data centers assisted by artificial intelligence technology should therefore be designed and built in such a way that power plants generating energy from renewable sources, such as wind farms and/or photovoltaic panel installations or other emission-free power plants, are also built next to them, or above them if the centers are built underground. In the future, these may also include a new generation of nuclear power plants generating energy from the spent fuel waste currently produced by nuclear power plants operating on the basis of widespread traditional nuclear technologies. Besides, in the future, another solution for emission-free clean energy may be a new generation of nuclear power based on cold fusion. In addition to the above, the technologies categorized as energy futures also include energy based on green hydrogen and new types of energy resources, which may be extracted from space. An effective combination of the above-mentioned technologies, i.e. green energy technologies and ICT and Industry 4.0/5.0 information technologies, may lead to the creation of AI-assisted green Big Data centers.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should AI-assisted Big Data centers be developed so that they fit in with the Sustainable Development Goals, so that they do not generate a lot of electricity consumption and/or are powered by renewable and carbon-free energy sources?
How should AI-assisted Big Data centers be developed so that they fit in with sustainability goals?
And what is your opinion on this topic?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Dariusz Prokopowicz
Relevant answer
Answer
Our focus is on exploring and developing new technological solutions that bypass the need for neural networks and machine learning to reduce energy consumption. In our KnoDL technology, this principle guides our design and execution. Recent testing outcomes demonstrate that our system is capable of processing 12 billion NRD records in just 28 minutes using a standard consumer laptop, exemplifying a promising direction for sustainable tech development.
  • asked a question related to Big Data
Question
3 answers
Hi,
I need help with big data processing. I am trying to figure out how Twitter processes big data, and what can be done with big data. Can you explain briefly and/or suggest references on this subject?
Thanks..
Best regards...
Relevant answer
Answer
How does Twitter process big data, and what can be done with big data?
Twitter processes big data using a combination of distributed computing, real-time processing, and various data storage technologies. Here's a brief overview:
  1. Data Collection: Twitter continuously collects a massive volume of data in real-time from user interactions, tweets, retweets, likes, follows, etc. This data is often unstructured and includes text, images, videos, and more.
  2. Data Ingestion: The collected data is ingested into a distributed computing framework like Apache Hadoop or Apache Spark. These frameworks allow for the parallel processing of large datasets across a cluster of machines.
  3. Data Storage: Twitter uses distributed storage systems, such as Hadoop Distributed File System (HDFS) or cloud-based storage solutions like Amazon S3, to store the vast amounts of data.
  4. Data Processing: Data is processed in batches or streams. Batch processing involves analyzing data in large chunks at scheduled intervals, while stream processing handles data in real-time as it's generated.
  5. Data Cleaning and Transformation: The data may undergo cleaning and transformation to remove noise, handle missing values, and prepare it for analysis.
  6. Analysis and Machine Learning: Various analytical techniques and machine learning algorithms are applied to extract insights, detect patterns, and make predictions from the data. This could involve sentiment analysis, trend detection, recommendation systems, and more.
  7. Visualization and Reporting: The results of the analysis are often visualized using tools like Tableau, Power BI, or custom-built visualization dashboards. This helps in presenting the findings in a user-friendly format.
  8. Data Storage for Retrieval: Processed data and results may be stored in databases, data warehouses, or other storage solutions for easy retrieval and further analysis.
  9. Feedback Loop: Twitter may use the insights gained from big data analysis to improve user experience, optimize content delivery, and refine algorithms for features like recommendation systems or content ranking.
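As a toy illustration of step 4 (stream processing), here is a sliding-window hashtag counter in plain Python; the hashtags are invented examples, and a real pipeline would run equivalent logic at scale on engines such as Storm, Heron, Flink or Spark Streaming:
import time
from collections import Counter, deque

WINDOW_SECONDS = 60
window = deque()    # (timestamp, hashtag) events inside the window
counts = Counter()  # current per-hashtag counts

def on_tweet(hashtag, now=None):
    """Ingest one event and evict events older than the window."""
    now = time.time() if now is None else now
    window.append((now, hashtag))
    counts[hashtag] += 1
    while window and window[0][0] < now - WINDOW_SECONDS:
        _, old = window.popleft()
        counts[old] -= 1

for tag in ["#ai", "#bigdata", "#ai", "#spark", "#ai"]:
    on_tweet(tag)
print(counts.most_common(2))  # trending hashtags in the last minute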
As for what can be done with big data, the possibilities are extensive:
  1. Business Intelligence and Analytics: Big data allows businesses to gain insights into customer behavior, market trends, and operational efficiency, which can inform decision-making.
  2. Personalized Experiences: Companies can use big data to tailor products, services, and content to individual customer preferences.
  3. Healthcare and Life Sciences: Big data is used for medical research, drug development, patient monitoring, and personalized medicine.
  4. Predictive Maintenance: Industries like manufacturing and utilities use big data to predict when equipment is likely to fail, allowing for proactive maintenance.
  5. Smart Cities and IoT: Big data is used to optimize urban planning, traffic management, energy usage, and more in smart city initiatives.
  6. Security and Fraud Detection: Big data analytics can identify patterns indicative of cybersecurity threats or fraudulent activities.
  7. Scientific Research: Big data is crucial in fields like genomics, climate modeling, astronomy, and many others.
For further reading, you can refer to books like:
  1. "Big Data: A Revolution That Will Transform How We Live, Work, and Think" by Viktor Mayer-Schönberger and Kenneth Cukier.
  2. "Hadoop: The Definitive Guide" by Tom White.
  3. "Streaming Systems: The What, Where, When, and How of Large-Scale Data Processing" by Tyler Akidau, Slava Chernyak, and Reuven Lax.
  4. "Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking" by Foster Provost and Tom Fawcett.
  • asked a question related to Big Data
Question
5 answers
With the advent of new technologies (e.g., AI, big data), according to reports, the shortage of technological talents may affect the operations of organizations. So how should the human resources department improve the retention rate of existing talents?
I believe that effective motivational strategies should be adopted, such as improving the employee experience through flexible working hours or remote working.
I would like to ask for your opinion on this aspect, thank you very much!
Relevant answer
Answer
I feel like scientists don't have the same remote opportunities as other workers do.
So I made this job board: https://www.sciremote.com/
Because scientists should be able to work from anywhere too.
  • asked a question related to Big Data
Question
1 answer
1. Malware execution and analysis on IoT and GPU-based processor devices: given that IoT and GPU devices use different processors, how will the same or different kinds of malware impact them, and how should a forensic investigation be conducted that covers both technologies?
2. Malware execution and analysis on IoT and GPU-based processor devices in a big data environment: in that setting, how will the same or different kinds of malware impact them?
Relevant answer
Answer
The aim of this question is to define global features that make analyses at least partially comparable across any type of processor. In my opinion, this is possible by focusing on the programming language and the program structure (AST, graph analysis...). However, that approach could reduce performance when the goal is to produce a categorization by any means (ML, DL...). I focus my work on program architecture, especially on central node detection. It is hard to produce a malware detection system with good accuracy this way, but it is a very interesting direction (a toy illustration follows).
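As a toy illustration of the central-node idea on a hypothetical call graph, using networkx; the function names are invented for the example:
import networkx as nx

# Hypothetical call graph extracted from a malware sample.
g = nx.DiGraph()
g.add_edges_from([
    ("main", "decrypt_payload"), ("main", "connect_c2"),
    ("decrypt_payload", "exec_payload"), ("exec_payload", "connect_c2"),
    ("connect_c2", "send_beacon"),
])

# Highly central functions are candidate architecture-level features
# that stay comparable across IoT- and GPU-targeted builds.
centrality = nx.betweenness_centrality(g)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])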
  • asked a question related to Big Data
Question
4 answers
Dear Researchers, I am looking for open-source gravity/magnetic data for interpretation via Oasis montaj Software and Voxi Earth Modeling. Please specify some sources from which the data is easily accessible.
Regards,
Ayaz
Relevant answer
Answer
Check the NGU (Geological Survey of Norway) website.
You can download most of our magnetic surveys for free.
  • asked a question related to Big Data
Question
4 answers
Hello everyone. I have a question about obtaining data from the Internet.
In my research I will analyze comments from websites and social media platforms, and I am searching for applications/technologies/other tools to download comments from the Internet to my computer.
Do you know any tools/apps to download comments for free?
There are around 10,000 comments, and copying and pasting them one by one would take a lot of time. I want to obtain the data quickly.
Do you have any suggestions for me?
Thank you so much for help.
Regards, Nejc
Relevant answer
Answer
Publish your paper for free
_____________________
Dear Researchers and postgraduate students
MESOPOTAMIAN JOURNAL OF BIG DATA (MJBD), issued by Mesopotamian Academic Press, welcomes original research articles, short papers, long papers, and review papers for publication in its next issue. The journal does not require any publication fee or article processing charge; all papers are published for free.
Journal info.
1 -Publication fee: free
2- Frequency: 1 issues per year
3- Subject: computer science, Big data, Parallel Processing, Parallel Computing and any related fields
4- ISSN: 2958-6453
5- Published by: Mesopotamian Academic Press.
6- Contact: email: [email protected]
Managing Editor: Dr. Ahmed Ali
The journal indexed in
1- Crossref
2- DOAJ
3- Google Scholar
4- ResearchGate
  • asked a question related to Big Data
Question
2 answers
Can the applicability of Big Data Analytics backed by artificial intelligence technology in the field be significantly enhanced when the aforementioned technologies are applied to the processing of large data sets extracted from the Internet and executed by the most powerful quantum computers?
Can the conduct of analysis and scientific research be significantly improved, increase efficiency, significantly shorten the execution of the process of research work through the use of Big Data Analytics and artificial intelligence applied to the processing of large data sets and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets extracted from the Internet and realized by the most powerful quantum computers, which also apply Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics technologies?
Can the scale of data processing carried out by the most powerful quantum computers be comparable to the processing that takes place in the billions of neurons of the human brain?
In recent years, the digitization of data and archived documents, the digitization of data transfer processes, etc., has been progressing rapidly.
The progressive digitization of data and archived documents, the digitization of data transfer processes, and the Internetization of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Accordingly, developed economies in which information and computer technologies are developing rapidly and finding numerous applications in various economic sectors are called information economies. The societies operating in these economies are referred to as information societies. Increasingly, in discussions of this issue, it is stated that another technological revolution is currently taking place, described as the fourth, and in some aspects already the fifth, technological revolution. Technologies classified as Industry 4.0/5.0 are developing particularly rapidly and finding more and more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence, including generative artificial intelligence built on artificial neural networks subjected to deep learning processes. As a result, the computational capabilities of microprocessors, which are becoming ever more capable and processing data ever faster, are gradually increasing. There is a rapid increase in the processing of ever larger sets of data and information. The number of companies, enterprises, and public, financial and scientific institutions that create large data sets and massive databases of data and information, generated in the course of a specific entity's activities and obtained from the Internet, and processed in the course of conducting specific research and analytical processes, is growing. In view of the above, the opportunities for the application of Big Data Analytics backed by artificial intelligence technology in improving research techniques, increasing the efficiency of the research and analytical processes used so far, and improving the scientific research conducted are also growing rapidly. By using the combined technologies of Big Data Analytics and other Industry 4.0/5.0 technologies, including artificial intelligence and quantum computers, in the processing of large data sets, the analytical capabilities of data processing, and thus also of conducting analysis and scientific research, can be significantly increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the conduct of analysis and scientific research be significantly improved, increase efficiency, significantly shorten the execution of the process of research work through the use of Big Data Analytics and artificial intelligence applied to the processing of large data sets and implemented by the most powerful quantum computers?
Can the applicability of Big Data Analytics supported by artificial intelligence technology in the field significantly increase when the aforementioned technologies are applied to the processing of large data sets extracted from the Internet and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets obtained from the Internet and realized by the most powerful quantum computers?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
The convergence of Big Data Analytics and AI already offers transformative capabilities in analyzing and deriving insights from massive datasets. When you introduce quantum computing into this mix, the potential computational power and speed increase exponentially. Quantum computers, by their very nature, can process vast amounts of data simultaneously, making them ideally suited for complex tasks such as optimization problems, simulations, and certain types of data analysis that classical computers struggle with.
In the context of scientific research, the combination of these technologies can indeed significantly enhance the efficiency and depth of analysis. For instance:
Speed and Efficiency: Quantum computers can potentially solve problems in seconds that would take classical computers millennia. This speed can drastically reduce the time required for data processing and analysis, especially in fields like genomics, climate modeling, and financial modeling.
Complex Simulations: Quantum computers can simulate complex systems more efficiently. This capability can be invaluable in fields like drug discovery, where simulating molecular interactions is crucial.
Optimization Problems: Many research tasks involve finding the best solution among a vast number of possibilities. Quantum computers, combined with AI algorithms, can optimize these solutions more effectively.
Deep Learning: Training deep learning models, especially on vast datasets, is computationally intensive. Quantum-enhanced machine learning can potentially train these models faster and more accurately.
Data Security: Quantum computers also bring advancements in cryptography, ensuring that the massive datasets being analyzed remain secure.
In conclusion, while the practical realization of powerful quantum computers is still an ongoing endeavor, their potential integration with Big Data Analytics and AI promises to usher in a new era of scientific research and analysis, marked by unprecedented speed, accuracy, and depth.
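As a small, tangential illustration for readers who want to experiment, here is a minimal quantum circuit (a Bell state) in Qiskit; it assumes a Qiskit 1.x installation with the qiskit-aer simulator, and demonstrates entanglement rather than big data processing itself:
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0
qc.measure_all()

sim = AerSimulator()
result = sim.run(transpile(qc, sim), shots=1024).result()
print(result.get_counts())  # roughly half '00' and half '11'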
  • asked a question related to Big Data
Question
39 answers
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources taken from online scientific knowledge bases, online scientific portals and online indexing databases of scientific publications?
I'm curious to know what you think about this. A solution based on an intelligent publication-search system and an intelligent content-analysis system operating on an online scientific portal could be of great help to researchers and scientists. In my opinion, creating a new generation of something similar to ChatGPT that uses databases built solely on online scientific knowledge bases, online scientific portals and online scientific publication indexing databases makes sense, provided that basic issues of copyright are respected and such tools use continuously updated, objectively and scientifically verified knowledge, data and information resources. With such a solution, researchers and scientists working on a specific topic could review the literature across the millions of scientific publications collected in specific online scientific portals and publication indexing databases. What is particularly important, such a partially automated literature review could probably be completed in a relatively short time. An intelligent system for searching and analyzing the content of scientific publications would, in a short period of time, select from among the millions of archived texts those publications in which other researchers and scientists have described analogous, similar, related or correlated issues and research results, within the same scientific discipline, the same topic, or an interdisciplinary field. It could also categorize the retrieved publications into those whose authors confirmed analogous conclusions from similar research, polemicized with the results of other researchers on a specific topic, obtained different results, or suggested other practical applications of results obtained on the same or a similar topic. However, for ethical reasons and out of respect for the research results of other researchers and scientists, it would be unacceptable for such a system to enable plagiarism, i.e. to provide research results or retrieved content on specific issues and topics without accurately citing the data source, its description and the names of the publications' authors; some unreliable researchers would take advantage of such an opportunity. A system of this kind should therefore give, for every retrieved publication, a full bibliographic description, source description and footnotes containing all the data necessary to build complete source footnotes for any citation of specific studies, research results, theses, data, etc. contained in publications written by other researchers and scientists.
So, building this kind of intelligent tool would make sense if ChatGPT-type tools were properly improved and the system of laws governing their use appropriately supplemented, so that using them does not violate copyright and they are used ethically and do not generate misinformation. Improving these tools so that they do not generate disinformation or create "fictitious facts" in the form of nicely written descriptions, essays, photos, videos, etc. of events that never happened requires keeping the underlying Big Data systems updated, i.e. continuously refreshing the data and information sets from which they generate answers, descriptions, images and so on. This is important because current online tools like ChatGPT often create "nicely described fictitious facts", which are used to generate fake news and misinformation in online social media. If all of the above were corrected and the improvements implemented, not only in some parts of the world but on a global scale, then creating a new generation of something similar to ChatGPT that uses databases built solely on online scientific knowledge bases, online scientific portals and online indexing databases of scientific publications would make sense and could prove helpful to people, including researchers and scientists. Besides, the current online ChatGPT-type tools are not perfect: they do not draw data directly, in real time, from specific databases and selected websites and portals, but from an offline database created some time ago. For example, the most popular version of ChatGPT still relies on data and information contained in texts downloaded from selected websites and web portals only up to 2021, so on many issues this information is already outdated. Hence the absurdities, inconsistencies with the facts, and "fictitious facts" in a significant share of the answers this system generates to questions asked by Internet users. In view of the above, such intelligent systems should be improved in a number of respects, technological, organizational, formal, normative, etc., so that they can be used in open access in the applications I wrote about above.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, does it make sense to create a new generation of something similar to ChatGPT, which will use databases built solely on the basis of continuously updated data, information, objectively verified knowledge resources taken from online scientific knowledge bases, online scientific portals and online indexing databases of scientific publications?
What do you think about creating a new generation of something similar to ChatGPT, which will use exclusively online scientific knowledge resources?
And what is your opinion about it?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Counting on your opinions and on a fair approach to the discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Our team has been working on retraining the ChatGPT 4.0 API to develop an academic research assistant tool to boost productivity in exactly the areas you discuss. There are two aspects to the process: a) customize or fine-tune the generic bot to remove its tendency to confabulate and to assume a scientific mindset (critical thinking, objectivity, ethics, etc.), and b) train the model on your specific academic literature so that it becomes an SME in your domain. Re-training the bot is a rather arduous process, as you have to provide at least 50-100 examples of the desired behavior in each specific area, and it also comes with considerable cost. However, we believe it will pay off thanks to the productivity gains after the bot is deployed. We hope to have a working prototype soon and will share it with a group of researchers interested in beta-testing. We will share updates on the project here: https://www.researchgate.net/post/Intelligent_Research_Assistant_Software_Development_Project-Athena
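For readers curious what such a customization step can look like in practice, below is a hypothetical minimal sketch of a supervised fine-tuning workflow using the OpenAI Python SDK (v1.x). The file name and base-model ID are illustrative assumptions, not the poster's actual pipeline, and fine-tuning availability differs by model:
```python
# Hypothetical sketch of a supervised fine-tuning workflow with the
# OpenAI Python SDK (v1.x); file name and model ID are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of example dialogues demonstrating the desired
#    "scientific mindset" behavior (one {"messages": [...]} object per
#    line, as the fine-tuning endpoint expects).
train = client.files.create(
    file=open("scientific_mindset_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch the fine-tuning job on a base model that supports it.
job = client.fine_tuning.jobs.create(
    training_file=train.id,
    model="gpt-3.5-turbo",  # illustrative; availability varies by model
)
print(job.id, job.status)
```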
  • asked a question related to Big Data
Question
3 answers
What is the use of big data artificial intelligence in Indian agriculture in achieving sustainable development goals?
Relevant answer
Answer
In agriculture, big data analytics can optimize crop production, minimize water usage, and mitigate the environmental impact of farming practices. In urban planning, AI-powered models can optimize transportation systems, reduce congestion, and improve energy efficiency. By enabling smart grids that partially match energy demand to periods when the sun is shining and the wind is blowing, AI can also help to integrate variable renewables. Advanced AI technologies, research, and product development may require vast computing resources available only in substantial computing centres. Through forecasting and predictive analytics, farmers can minimize the risk of crop failures. For example, Intello Labs is a startup company in India that uses artificial intelligence (AI) to help farmers analyze the quality of their produce and reduce food loss. Regarding the role of artificial intelligence in achieving the Sustainable Development Goals: AI enables new technologies that improve efficiency and productivity, but it may also increase inequalities among and within countries, thus hindering the achievement of the 2030 Agenda. AI can be used, for instance, to manage environmental consequences and changes in a variety of economic contexts and settings. These are just a few examples of potential uses, which also include safer supply chains, environmental control and regulation, and weather forecasting. AI has the capacity to improve production, increase efficiency and improve safety along production lines; AI-powered robots can work continuously around the clock, ensuring a continuous supply of output. AI algorithms can analyze soil data and weather patterns, enabling farmers to make more informed decisions about irrigation, fertilizer use, and other practices that affect soil health and water quality. AI can also play a role in reducing waste and improving recycling systems.
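To make the crop-forecasting point concrete, here is a minimal sketch, on synthetic data with invented feature names, of the kind of yield-prediction model such systems build (scikit-learn assumed):
```python
# Minimal, self-contained sketch: synthetic data and illustrative
# features only, to show the shape of a yield-prediction pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
rainfall = rng.uniform(200, 1200, n)   # mm per season
soil_n = rng.uniform(0.05, 0.3, n)     # soil nitrogen fraction
temp = rng.uniform(18, 35, n)          # mean temperature, deg C
# Synthetic "true" yield with noise, just to have a target to learn.
yield_t = (2.0 + 0.004 * rainfall + 8.0 * soil_n - 0.05 * temp
           + rng.normal(0, 0.3, n))

X = np.column_stack([rainfall, soil_n, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```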
  • asked a question related to Big Data
Question
6 answers
Already, can the application of a combination of artificial intelligence technology, Big Data Analytics and quantum computers assist in the strategic management of an enterprise?
Already today, can the application of a combination of artificial intelligence technology, Big Data Analytics and quantum computers assist in the field of carrying out multi-faceted, complex strategic analysis of the business environment and determinants of company development, predictive analysis based on the processing of large data sets and, therefore, also in the field of strategic business management?
The ongoing technological progress is characterized by the dynamic development of Industry 4.0/5.0 technologies, typical of the current fourth technological revolution, including ICT information and communication technologies and technologies for advanced, multi-criteria processing of large data sets and information resources. The development of information processing in this era is driven by the growth of ICT, Internet technologies and advanced data processing, which include: Big Data Analytics, Data Science, cloud computing, artificial intelligence, machine learning, deep learning, the personal and industrial Internet of Things, Business Intelligence, autonomous robots, horizontal and vertical data-system integration, multi-criteria simulation models, digital twins, additive manufacturing, Blockchain, smart technologies, cybersecurity instruments, Virtual and Augmented Reality, and other advanced data-processing technologies such as data mining. Technological advances in computing, ever faster microprocessors, increasingly capacious and high-speed data storage, etc., make it possible to process large data sets faster and more efficiently. In addition, numerous new applications of these technologies are emerging in various sectors of the economy as they are combined in various configurations, including many business applications in companies and enterprises. Implementing these technologies in the activities of companies, enterprises, and financial and public institutions helps increase the efficiency of specific processes. In view of the above, there is much to suggest that, if not now then soon, the combination of artificial intelligence, Big Data Analytics and quantum computers may be helpful in carrying out multi-faceted, complex strategic analyses of the business environment and the determinants of company development, in predictive analyses based on the processing of large data sets, and therefore also in strategic business management.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the application of the combination of artificial intelligence technology, Big Data Analytics and quantum computers already be helpful in the field of strategic business management?
Can the use of a combination of artificial intelligence technology, Big Data Analytics and quantum computers assist in strategic business management?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Yes, the combination of artificial intelligence (AI) technology, Big Data Analytics, and quantum computers has the potential to significantly support and enhance strategic business management in various ways. Here are some key aspects where these technologies can have a positive impact:
  1. Data Processing and Analysis: Big Data Analytics, powered by AI, can handle large volumes of data to identify trends, patterns, and correlations that might be difficult to uncover through traditional methods. This can help in making data-driven strategic decisions.
  2. Predictive Analytics: AI, particularly machine learning, can utilize historical data to make predictions and forecasts. This is valuable for strategic planning, such as predicting market trends, demand fluctuations, and supply chain optimization (a small forecasting sketch follows at the end of this answer).
  3. Enhanced Decision-Making: AI algorithms can process and analyze data faster and more accurately than humans, helping in better decision-making. Quantum computing, in particular, has the potential to solve complex optimization problems and simulations that are crucial for strategic decision support.
  4. Personalization and Customer Insights: AI can analyze customer data and provide insights into consumer behavior, preferences, and sentiment. This information can be used to create more personalized marketing and product strategies.
  5. Risk Management: AI and Big Data can be used to analyze various risk factors, such as financial, operational, and market risks, helping businesses to proactively manage and mitigate potential threats.
  6. Supply Chain Optimization: Quantum computing can be used to optimize complex supply chain networks, finding the most efficient routes, minimizing costs, and reducing waste. This can result in significant cost savings.
  7. Fraud Detection: AI and Big Data can help in real-time fraud detection by identifying unusual patterns and anomalies in transactions, thereby protecting businesses from financial losses.
  8. Resource Allocation: AI and quantum computing can help businesses allocate resources more efficiently, whether it's human resources, capital investments, or marketing budgets, by optimizing and simulating various scenarios.
  9. Competitive Advantage: Leveraging advanced technologies such as quantum computing can provide a competitive edge by solving problems faster and more effectively than competitors.
  10. Innovation and Research: Quantum computing can accelerate research and development in various industries, leading to the discovery of new materials, pharmaceuticals, and technological advancements, which can have a significant impact on business strategies.
However, it's important to note that the practical application of quantum computing in business is still in its infancy, and the technology is not yet widely accessible. Additionally, there are challenges and considerations, including data privacy and ethical concerns, that need to be addressed when implementing AI and Big Data Analytics in business strategies. As these technologies continue to develop, their role in strategic business management is likely to expand and evolve.
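As a concrete illustration of the predictive-analytics point (item 2 above), here is a minimal sketch on synthetic monthly demand data: fit a linear trend plus yearly seasonality by least squares, then forecast the next quarter. All numbers are invented for illustration:
```python
# Minimal sketch (synthetic monthly demand): fit trend + seasonality,
# then forecast the next three months.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(48)                                  # 48 months of history
demand = (100 + 1.5 * t + 10 * np.sin(2 * np.pi * t / 12)
          + rng.normal(0, 3, t.size))

# Design matrix: intercept, linear trend, yearly sine/cosine terms.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12),
                     np.cos(2 * np.pi * t / 12)])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

t_f = np.arange(48, 51)                            # next 3 months
X_f = np.column_stack([np.ones_like(t_f), t_f,
                       np.sin(2 * np.pi * t_f / 12),
                       np.cos(2 * np.pi * t_f / 12)])
print("forecast:", (X_f @ coef).round(1))
```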
  • asked a question related to Big Data
Question
1 answer
How is big data managed in the logistics process within an organization, and what are the advances of recent years?
Relevant answer
Answer
If you ask chatGPT "How is big data managed in the logistics process within an organization, and what are the recent advances?" a reasonable answer comes up:
------
Managing big data in the logistics process within an organization has become increasingly critical in recent years. Here are some of the recent advances in this field:
1. **Real-time Data Analysis:** Logistics companies are now using big data analytics to process and analyze data in real-time. This allows them to make decisions quickly, optimize routes, and respond to changing conditions on the fly.
2. **Predictive Analytics:** Advanced algorithms and machine learning models are being used to predict future logistics trends, demand patterns, and potential disruptions. This helps in proactive decision-making and resource allocation.
3. **IoT Integration:** Internet of Things (IoT) devices, such as GPS trackers and sensors, are extensively used to collect data from vehicles, products, and facilities. This data is then integrated into big data systems to provide a holistic view of the supply chain.
4. **Blockchain Technology:** Blockchain is being explored for enhancing transparency and security in logistics. It helps in tracking goods through the supply chain, reducing fraud, and ensuring data integrity.
5. **Automation and Robotics:** Automation and robotics are increasingly used in warehouses and distribution centers. These technologies generate large amounts of data that can be harnessed for optimization and efficiency.
6. **Cloud Computing:** The adoption of cloud-based solutions has made it easier for organizations to store and process vast amounts of data. This scalability is essential for handling big data effectively.
7. **Data Visualization Tools:** User-friendly data visualization tools make it easier for logistics professionals to interpret and gain insights from big data, facilitating better decision-making.
8. **Environmental Considerations:** Logistics companies are using big data to optimize routes and reduce the environmental footprint by minimizing fuel consumption and emissions.
9. **Supplier and Inventory Management:** Big data analytics help in improving supplier relationships and optimizing inventory levels, reducing costs, and ensuring products are available when needed.
10. **Customization and Personalization:** Big data allows logistics companies to tailor services and products to individual customer preferences, providing a more personalized and efficient experience.
In summary, the management of big data in the logistics process has seen significant advancements in recent years, contributing to more efficient and responsive supply chains, cost reductions, and improved customer experiences. The ability to collect, process, and analyze data has become a crucial competitive advantage in the logistics industry.
----
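To ground the route-optimization point from the quoted list, here is a minimal sketch assuming the networkx library, a toy road network, and made-up travel times:
```python
# Minimal sketch (toy network, invented travel times): find the fastest
# delivery route with Dijkstra's algorithm via networkx.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("depot", "A", 30), ("depot", "B", 20),
    ("A", "C", 15), ("B", "C", 35),
    ("B", "D", 25), ("C", "customer", 10),
    ("D", "customer", 40),
])  # weights = travel time in minutes

route = nx.shortest_path(G, "depot", "customer", weight="weight")
minutes = nx.shortest_path_length(G, "depot", "customer", weight="weight")
print(route, minutes)  # ['depot', 'A', 'C', 'customer'] 55
```
Real logistics systems solve far larger variants of this (vehicle routing with time windows and live traffic feeds), but the underlying graph-optimization idea is the same.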
I would add that the biggest problem companies face in attempting to exploit recent advances is a lack of software-development skills. The reason is as follows:
1. To exploit a new technology, the skills must be there to create a test and verify the benefit of the advancement.
2. To test a technology that requires big data as input, big-data administration skills are a prerequisite.
3. As these skills are novel (big data is only about 10 years old), organisations don't hire enough software engineers to enable or manage experiments at this scale.
So what are these skills? Again ChatGPT provides a reasonable answer:
---
Managing big data effectively requires a combination of technical, analytical, and business skills. Here are some of the key skills required to manage big data:
1. **Data Analysis:** Proficiency in data analysis is crucial. You should be able to extract meaningful insights from large datasets. Familiarity with tools like Python, R, and data analysis libraries is essential.
2. **Data Management:** Understanding data storage and management systems is vital. This includes knowledge of relational databases, NoSQL databases, and data warehousing solutions.
3. **Programming Skills:** Proficiency in programming languages, particularly Python and Java, is important for data manipulation, transformation, and analysis. Knowledge of SQL for database querying is also valuable.
4. **Big Data Technologies:** Familiarity with big data technologies such as Hadoop, Spark, and related ecosystems is essential. These tools enable the processing and analysis of large datasets.
5. **Machine Learning:** Basic understanding of machine learning is beneficial, as it can be applied to extract predictive insights from big data. Knowing how to use machine learning libraries and frameworks is an advantage.
6. **Data Visualization:** The ability to create meaningful visualizations using tools like Tableau, Power BI, or open-source options is crucial for conveying insights to non-technical stakeholders.
7. **Statistics:** A good grasp of statistical methods is essential for data analysis and interpretation. Knowledge of descriptive statistics, inferential statistics, and hypothesis testing is valuable.
8. **Business Acumen:** Understanding the business context is critical. Being able to translate data insights into actionable strategies and decisions that benefit the organization is a key skill.
9. **Data Security and Compliance:** Knowledge of data security best practices and compliance regulations is important, especially when handling sensitive or regulated data.
10. **Problem-Solving:** Strong problem-solving skills are required to address data-related challenges, such as data quality issues, scalability, and performance optimization.
11. **Communication:** Effective communication is crucial for conveying complex technical findings to non-technical stakeholders and decision-makers.
12. **Project Management:** Being able to manage big data projects, including setting objectives, timelines, and resources, is important for successful data management.
13. **Data Engineering:** Understanding data engineering concepts, including data pipelines, data cleansing, and ETL (Extract, Transform, Load) processes, is essential for data preparation and transformation.
14. **Cloud Computing:** Familiarity with cloud platforms like AWS, Azure, or Google Cloud is valuable, as many big data solutions are hosted in the cloud.
15. **Continuous Learning:** Given the rapidly evolving nature of data technologies, a willingness to continually learn and adapt to new tools and techniques is essential.
In summary, managing big data requires a combination of technical, analytical, and business skills. As the field continues to evolve, staying updated with the latest developments and tools is essential to be an effective big data manager.
---
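To make the ETL skill (item 13 in the quoted list) concrete, here is a minimal pandas sketch with illustrative file names and columns, showing the extract, transform, load steps:
```python
# Minimal ETL sketch; file names and columns are illustrative.
import pandas as pd

# Extract: read raw shipment records (hypothetical CSV layout).
raw = pd.read_csv("shipments_raw.csv", parse_dates=["shipped_at"])

# Transform: drop incomplete rows, normalize units, derive a field.
clean = (raw.dropna(subset=["weight_kg", "destination"])
            .assign(weight_t=lambda d: d["weight_kg"] / 1000,
                    ship_month=lambda d: d["shipped_at"].dt.to_period("M")))

# Load: write the curated table for downstream analytics.
clean.to_csv("shipments_curated.csv", index=False)
```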
Whilst ChatGPT lists all the general issues, it is important to recognize that reducing the number of technologies will significantly reduce the cognitive burden on the big data team. Unfortunately, given the rapid pace of development, navigating all the possible technologies leads to a jungle of options, where the selection of tools for "big data" projects becomes critical. Statistics cited by Alex Xu (ByteByteGo) suggest that more than 50% of big data projects fail to meet expectations due to the wrong choice of technology (e.g. a poor tech-stack choice) and hence require some form of re-engineering that causes project overruns.
In summary, the lack of skills and prior experience leaves big data projects to the largest corporations, which can afford the learning process.
Finally, I recommend you ask yourself: what makes big data "big"?
A sufficiently technical answer to that question will leave you informed about what is required to manage "big data", and subsequently how to meet those prerequisites.
  • asked a question related to Big Data
Question
2 answers
Dear Colleagues,
Title of Research Project: Exploring Reflective Learning Strategies in Big Data Analytics Education and Practice: A Cross-sectional Study on Adoption, Effectiveness, and Influencing Factors
I will appreciate your participation in this research study, which has been reviewed and approved by The Salomons Ethics Panel, Salomons Centre for Applied Psychology, Canterbury Christ Church University, UK. This questionnaire will take about 10-12 minutes to complete.
Here is the link to the questionnaire including the informed consent and participant information sheet: https://forms.gle/kFycqu7KXqKkzV8F9
Thanks, Rossi A. Hassad, PhD, MPH, CStat, PStat
Canterbury Christ Church University, UK
Relevant answer
Answer
Dear Radhiya,
Thanks very much for this information.
Here is the link to the questionnaire including the informed consent and participant information sheet: https://forms.gle/kFycqu7KXqKkzV8F9
Regards,
Rossi Hassad, PhD, MPH
  • asked a question related to Big Data
Question
1 answer
How Big Data analytics has helped to reconstruct the history of our earth
Relevant answer
Answer
I don't know that big data analytics has helped to reconstruct Terra's history, though it may have some potential. If most members of the earth-history community are closed-minded, alternative models will be met with disdain. Hopefully, they're an open-minded lot. (They do work with hard evidence, rather than faith-based mathematics, so there is a good chance they're more flexible; faith-based theorists defend their beliefs much more strongly.)
  • asked a question related to Big Data
Question
3 answers
Can the conduct of analysis and scientific research be significantly improved through the use of Big Data Analytics, artificial intelligence and quantum computers?
Can the possibilities of Big Data Analytics applications supported by artificial intelligence increase significantly when these technologies are applied to the processing of large data sets obtained from the Internet and run on the most powerful quantum computers?
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and run on the most powerful quantum computers, significantly improve the conduct of analysis and scientific research, increase its efficiency, and shorten research work?
What are the analytical capabilities of processing large data sets extracted from the Internet on the most powerful quantum computers that also apply Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics?
Can the scale of data processing carried out by the most powerful quantum computers be compared to the data processing carried out by the billions of neurons of the human brain?
In recent years, the digitization of data and archived documents, digitization of data transfer processes, etc., has been progressing rapidly.
The progressive digitization of data and archived documents, of data-transfer processes, and of communications, economic processes, and research and analytical processes is becoming a typical feature of today's developed economies, which are therefore called information economies, with the societies operating in them referred to as information societies. Another technological revolution is under way, described as the fourth and, in some respects, already the fifth, in which Industry 4.0/5.0 technologies are developing particularly rapidly. Those supporting research and analytical processes include Big Data Analytics and artificial intelligence, including generative artificial intelligence built on artificial neural networks subjected to deep learning. The computational capabilities of microprocessors keep growing, ever larger sets of data and information are being processed, and the number of companies, enterprises, and public, financial and scientific institutions that build and process massive databases generated in the course of their activities and obtained from the Internet is increasing. Accordingly, the opportunities to apply Big Data Analytics backed by artificial intelligence to improve research techniques, to increase the efficiency of existing research and analytical processes, and to improve scientific research are also growing rapidly. By combining Big Data Analytics with other Industry 4.0/5.0 technologies, including artificial intelligence and quantum computers, in the processing of large data sets, the analytical capabilities of data processing, and thus of analysis and scientific research, can be significantly increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and run on the most powerful quantum computers, significantly improve the conduct of analysis and scientific research, increase its efficiency, and shorten research work?
Can the applicability of Big Data Analytics supported by artificial intelligence significantly increase when these technologies are applied to the processing of large data sets obtained from the Internet and run on the most powerful quantum computers?
What are the analytical capabilities of processing large data sets extracted from the Internet on the most powerful quantum computers?
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
All three areas are still at the development stage, and each is already helping across research fields. The development of quantum computers could prove the most transformative, because at the microscopic scale the universe follows quantum physics, not classical physics, so quantum machines are naturally suited to simulating it, though it would be an overstatement to expect them to solve every problem.
  • asked a question related to Big Data
Question
3 answers
Can the application of artificial intelligence and Big Data Analytics technologies help improve system energy security management processes and enhance this security?
Probably yes, provided that new green technologies and the development of emission-free clean energy are a priority in the energy policy shaped by the government. The efficient application of artificial intelligence and Big Data Analytics can help improve systemic energy security management processes and increase this security. However, it is crucial to combine the functionality of these technologies effectively and to apply them to managing the risk of energy emergencies; analyzing the determinants shaping the development of energy production; analyzing the factors shaping the level of energy security; and forecasting future energy production in the context of projected changes in energy demand and in the output attainable from specific types of energy sources under specific determinants.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the application of artificial intelligence and Big Data Analytics technologies help improve the processes of systemic energy security management and enhance this security?
Can artificial intelligence and Big Data Analytics help improve systemic energy security management processes?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Big data can be used to improve energy systems and help ensure energy security, for example by securing grid operations through load forecasting, fault detection and diagnosis, and voltage-dip estimation.
Regards,
Shafagat
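To make the fault-detection point in the answer above concrete, here is a minimal sketch on synthetic grid-sensor readings, assuming scikit-learn's IsolationForest; the voltage and frequency values are invented:
```python
# Minimal sketch (synthetic sensor data): flag anomalous grid
# measurements (e.g. voltage dips/swells) with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# Normal operation: ~230 V, ~50 Hz with small noise.
normal = rng.normal(loc=[230.0, 50.0], scale=[2.0, 0.05], size=(500, 2))
faults = np.array([[198.0, 49.1], [251.0, 50.9]])  # injected anomalies
readings = np.vstack([normal, faults])              # columns: V, Hz

det = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = det.predict(readings)                       # -1 marks an anomaly
print("flagged rows:", np.where(flags == -1)[0])
```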
  • asked a question related to Big Data
Question
6 answers
Exploring the role of AI and data analytics in improving our ability to predict and manage pandemics
Relevant answer
Answer
Enhancing Pandemic Forecasting and Response Through AI and Big Data Analytics
In today's technological era, the combination of Artificial Intelligence (AI) and Big Data Analytics (BDA) offers great potential in epidemiology, specifically in forecasting and managing pandemics. Below is an outline of the role of AI and BDA at this critical juncture for public health.
  1. Temporal and Spatial Epidemiological Trend Detection: Heterogeneous Data Integration: AI methods, particularly deep learning architectures such as convolutional neural networks (CNNs), can integrate varied data streams, from climatic datasets to population mobility patterns, helping to discern latent epidemiological trajectories. Geospatial Analytics: Leveraging geospatial big data, AI models can perform spatial clustering and hotspot detection and generate spatial epidemiological maps, thereby optimizing surveillance operations.
  2. Genomic Epidemiology and Phylogenetics: Pathogen Genomic Sequence Analysis: Deep learning frameworks, including recurrent neural networks (RNNs) and long short-term memory (LSTM) units, can analyze nucleotide sequences, enabling real-time tracking of pathogenic mutations and their epidemiological consequences. Phylodynamic Modeling: Integrating phylogenetic trees with epidemiological data improves the detection of transmission chains, assisting early intervention in superspreading events.
  3. Predictive Analytics and Forecasting: Epidemic Trajectory Forecasting: Using techniques such as time-series analysis, Gaussian processes, and Bayesian inference, AI can delineate potential epidemic trajectories, supporting proactive pandemic management (a small compartment-model sketch follows at the end of this answer). Sentinel Surveillance Augmentation: Natural language processing (NLP) and sentiment analysis of digital platforms can detect epidemiological anomalies and incipient outbreaks, amplifying sentinel surveillance.
  4. Optimization of Resource Allocation: Reinforcement Learning for Policy Decisions: AI-driven reinforcement learning can simulate various pandemic response strategies, pointing to policy frameworks and resource allocations that minimize societal and economic costs. Supply Chain Analytics: Through BDA, the healthcare supply chain can be optimized in real time, ensuring effective distribution of essentials such as personal protective equipment (PPE) and vaccines.
  5. Socio-behavioral Analytics and Public Engagement: Sentiment Analysis of Public Discourse: Applying NLP to social media and public discourse, AI can gauge public sentiment, supporting targeted communication strategies and effective public engagement. Epidemiological Simulation Models: AI-driven agent-based modeling and cellular automata can simulate socio-behavioral scenarios, shedding light on transmission dynamics in diverse sociocultural settings.
In summary, integrating AI and BDA extends traditional epidemiological methods and sharpens pandemic forecasting and response; as the Fourth Industrial Revolution unfolds, the role of such technology-driven methods in public health resilience is becoming ever clearer.
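To make the epidemic-trajectory-forecasting point (item 3 above) concrete, here is a minimal SIR compartment-model sketch with illustrative parameters, integrated with simple Euler steps:
```python
# Minimal SIR epidemic sketch (illustrative parameters): simulate the
# infection curve with explicit Euler time steps.
N, beta, gamma = 1_000_000, 0.30, 0.10  # population, contact, recovery
S, I, R = N - 100, 100, 0               # start with 100 infectious
dt, days = 0.1, 180

peak_I, peak_day = I, 0.0
for step in range(int(days / dt)):
    new_inf = beta * S * I / N * dt     # S -> I transitions this step
    new_rec = gamma * I * dt            # I -> R transitions this step
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    if I > peak_I:
        peak_I, peak_day = I, step * dt

print(f"R0={beta/gamma:.1f}, peak ~{peak_I:,.0f} infectious "
      f"on day {peak_day:.0f}")
```
Real forecasting systems layer data assimilation and uncertainty quantification on top of such compartment models, but this is the core trajectory being forecast.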
  • asked a question related to Big Data
Question
2 answers
With the advent of new technologies (for example, AI, big data), the shortage of scientific and technological talents may affect the organization's operations. So how should the human resources department improve the retention rate of existing talents?
I believe that effective motivational strategies should be adopted, improving the employee experience through, for example, flexible working hours or remote working.
I would like to ask for your opinion on this aspect. Thank you very much!
Relevant answer
Answer
Thank you for sharing your thoughts!
  • asked a question related to Big Data
Question
2 answers
With the advent of new technologies (e.g., AI, big data), according to reports, the shortage of technological talents may affect the operations of organizations. So how should the human resources department improve the retention rate of existing talents?
I believe that effective motivational strategies should be adopted, improving the employee experience through, for example, flexible working hours or remote working.
I would like to ask for your opinion on this aspect. Thank you very much!
Relevant answer
Answer
Thank you for sharing! This information is very useful to me. I wish you success in your work and research!
  • asked a question related to Big Data
Question
4 answers
What are the possibilities for the applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
The progressive digitization of data and archived documents, of data-transfer processes, and of communications, economic and research processes is a typical feature of today's developed economies. Another technological revolution is currently taking place, described as the fourth and, in some respects, already the fifth. Technologies categorized as Industry 4.0/5.0 are developing particularly rapidly; those supporting research and analytical processes in various institutions and business entities include Big Data Analytics and artificial intelligence. The computational capabilities of microprocessors are steadily increasing, ever larger sets of data and information are being processed, and databases of data and information extracted from the Internet are being created and processed in specific research and analysis processes. In connection with this, the possibilities of applying Big Data Analytics supported by artificial intelligence to improve research techniques, to increase the efficiency of existing research and analytical processes, and to improve the scientific research being conducted are also growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of applications of Big Data Analytics supported by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted?
What are the possibilities of applications of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz
On my profile of the Research Gate portal you can find several publications on Big Data issues. I invite you to scientific cooperation in this problematic area.
Dariusz Prokopowicz
Relevant answer
Answer
In today's digital era, AI is a hot topic, but that does not ensure that AI will be able to replace human intelligence.
  • asked a question related to Big Data
Question
6 answers
Is it possible to build a highly effective forecasting system for future financial and economic crises based on artificial intelligence technology in combination with Data Science analytics, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies?
Is it possible to build a highly effective, multi-faceted, intelligent forecasting system for future financial and economic crises based on artificial intelligence technology in combination with Data Science analytics, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies as part of a forecasting system for complex, multi-faceted economic processes in such a way as to reduce the scale of the impact of the paradox of a self-fulfilling prediction and to increase the scale of the paradox of not allowing a predicted crisis to occur due to pre-emptive anti-crisis measures applied?
What do you think about the involvement of artificial intelligence in combination with Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies for the development of sophisticated, complex predictive models for estimating current and forward-looking levels of systemic financial, economic risks, debt of the state's public finance system, systemic credit risks of commercially operating financial institutions and economic entities, forecasting trends in economic developments and predicting future financial and economic crises?
Research and development work is already under way on teaching artificial intelligence to "think", i.e. to reproduce the conscious thought process realised in the human brain. Thinking, awareness of one's own existence, and the ability to think abstractly and critically, separating knowledge acquired in the learning process from its processing in conscious, abstract thought, are just some of the abilities attributed exclusively to humans. However, as artificial intelligence technology improves, attempts are being made to create "thinking" computers or androids, and in the future there may be attempts to create an artificial consciousness: a digital creation functioning in a way similar to human consciousness. At the same time, as successive generations of artificial intelligence are developed and taught to perform work requiring creativity, systems are being built to process the ever-increasing amounts of data and information stored on Big Data Analytics platforms and taken, for example, from selected websites. In this way it may become possible to create "thinking" computers which, drawing data from the Internet in real time according to the tasks at hand, will be able to develop predictive models and specific forecasts of future processes and phenomena, based on models composed of algorithms resulting from earlier machine learning. When such solutions become possible, the question arises of how the intelligent, multi-faceted forecasting models being built should take into account paradoxes, known for years, concerning forecast phenomena that are to appear only in the future and whose occurrence is not certain. Two such paradoxes stand out: the paradox of the self-fulfilling prophecy, and the paradox of a predicted crisis not occurring precisely because pre-emptive anti-crisis measures were applied. If both were taken into account in the forecasting models, their effects could prove asymmetric and inversely related. In view of the above, once artificial intelligence has been suitably improved, taught to "think" and to process huge amounts of data and information in real time in a multi-criteria, creative manner, it may become possible to build a highly effective, multi-faceted, intelligent system for forecasting future financial and economic crises, designed to reduce the impact of the self-fulfilling-prophecy paradox and to increase the scale of the paradox of a predicted crisis being averted by pre-emptive anti-crisis measures. Multi-criteria processing of large data sets, involving artificial intelligence, Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies, makes it possible to operate on large data and information sets effectively and increasingly automatically, and thus to develop advanced, complex forecasting models for estimating current and future levels of systemic financial and economic risks, the indebtedness of the state's public finance system, and the systemic credit risks of commercially operating financial institutions and economic entities, as well as for forecasting economic trends and predicting future financial and economic crises.
In view of the above, I address the following questions to the esteemed community of scientists and researchers:
Is it possible to build a highly effective, multi-faceted, intelligent forecasting system for future financial and economic crises based on artificial intelligence technology in combination with Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies in a forecasting system for complex, multi-faceted economic processes in such a way as to reduce the scale of the impact of the paradox of the self-fulfilling prophecy and to increase the scale of the paradox of not allowing a forecasted crisis to occur due to pre-emptive anti-crisis measures applied?
What do you think about the involvement of artificial intelligence in combination with Data Science, Big Data Analytics, Business Intelligence and/or other Industry 4.0 technologies to develop advanced, complex predictive models for estimating current and forward-looking levels of systemic financial risks, economic risks, debt of the state's public finance system, systemic credit risks of commercially operating financial institutions and economic entities, forecasting trends in economic developments and predicting future financial and economic crises?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, in order to determine whether a highly effective system for forecasting future financial and economic crises can be built on artificial intelligence combined with Data Science analytics, Big Data Analytics, Business Intelligence and/or other Industry 4.0/5.0 technologies, it is first necessary to define precisely what is to be forecast: the specific risk factors that in the past were the sources of particular economic, financial and other crises, and that may be such factors in the future. But would such a forecasting system, based on combining Big Data Analytics and artificial intelligence, be able to forecast unusual events that generate new types of risk, so-called "black swans"? For example, could it predict another event, driven by a hard-to-predict new type of risk, leading to something like the 2008 global financial crisis, the 2020 pandemic, or something completely new that has not yet appeared?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
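To illustrate the "black swan" limitation raised in the answer above, here is a minimal sketch on synthetic daily returns: a model fitted to a calm history assigns essentially zero probability to the kind of extreme move it has never seen:
```python
# Minimal sketch (synthetic daily returns): a threshold-based tail-risk
# monitor flags extremes relative to the history it has seen, but a true
# "black swan" lies outside that history entirely.
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(0.0, 0.01, 2000)   # calm history, ~1% daily moves
returns[-1] = -0.12                     # a sudden 12% crash

mu, sigma = returns[:-1].mean(), returns[:-1].std()
z = (returns[-1] - mu) / sigma
print(f"z-score of last day: {z:.1f} (|z| > 4 would be flagged)")
# A ~12-sigma move is essentially "impossible" under the fitted normal
# model: exactly the kind of event such a system cannot anticipate.
```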
  • asked a question related to Big Data
Question
7 answers
The fourth technological revolution currently underway is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning, artificial intelligence, ... what's next? Intelligent thinking autonomous robots?
The fourth technological revolution currently under way is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning and artificial intelligence. Machine learning, machine self-learning and machine learning systems are synonymous terms for the field of artificial intelligence focused on algorithms that can improve themselves, automatically, through experience gained from exposure to large data sets. Machine learning algorithms build a mathematical model of data processing from sample data, called a training set, in order to make predictions or decisions without being explicitly programmed by a human to do so. They are used in a wide variety of applications, such as spam filtering, i.e. screening Internet messages for unwanted correspondence, or image recognition, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks. Deep learning is a subcategory of machine learning that involves the creation of deep neural networks, i.e. networks with multiple layers of neurons. Deep learning techniques are designed to improve, among other things, automatic speech processing, image recognition and natural language processing. The structure of deep neural networks consists of multiple layers of artificial neurons. Simple neural networks can be designed manually, so that a specific layer detects specific features and performs specific data processing, while learning consists of setting appropriate weights, i.e. the significance assigned to components of the issues being modelled, on the basis of processing and learning from large amounts of data. In large neural networks, the deep learning process is, to a certain extent, automated and self-contained: the network is not designed by hand to detect specific features, but detects them itself from appropriately labelled data sets. Both the data sets and the networks must be prepared by specialists, but the features are detected by the program itself. Large amounts of data can therefore be processed, and the network can automatically learn higher-level feature representations, which means it can detect complex patterns in the input data. Accordingly, deep learning systems are built on Big Data Analytics platforms so that the learning process is performed on a sufficiently large amount of data. Artificial intelligence, denoted by the acronym AI, is, in turn, the "intelligent", multi-criteria, advanced, automated processing of complex, large amounts of data carried out in a way that alludes to certain features of human intelligence exhibited by thought processes. As such, it is the intelligence exhibited by artificial devices, including certain advanced ICT and Industry 4.0 systems and devices equipped with these technological solutions. The concept of artificial intelligence is contrasted with natural intelligence, i.e. that of humans. Artificial intelligence thus has two basic meanings. On the one hand, it is a hypothetical intelligence realised through a technical rather than a natural process.
On the other hand, it is the name of a technology and a research field of computer science and cognitive science that also draws on psychology, neurology, mathematics and philosophy. In computer science and cognitive science, artificial intelligence also refers to the creation of models and programs that simulate at least partially intelligent behaviour. Artificial intelligence is also considered in philosophy, within which a theory of the philosophy of artificial intelligence is being developed, and it is a subject of interest in the social sciences. The main task of research and development work on artificial intelligence and its new applications is the construction of machines and computer programs capable of performing selected functions analogously to the human mind working with the human senses, including processes that do not lend themselves to numerical algorithmisation. Such problems are sometimes referred to as AI-hard and include decision-making in the absence of all data, analysis and synthesis of natural languages, logical (rational) reasoning, automatic proving of theorems, computer logic games such as chess, intelligent robots, and expert and diagnostic systems, among others. Artificial intelligence can be developed and improved by integrating it with machine learning, fuzzy logic, computer vision, evolutionary computing, neural networks, robotics and artificial life. AI technologies have been developing rapidly in recent years, driven by their combination with other Industry 4.0 technologies, by microprocessors, digital machines and computing devices with ever-greater capacity for multi-criteria processing of growing amounts of data, and by the emergence of new fields of application. Recently, the development of artificial intelligence has become a topic of discussion in various media due to ChatGPT, an open-access, automated, AI-based solution with which Internet users can hold a kind of conversation. The solution is based on, and learns from, large amounts of data extracted in 2021 from specific data and information resources on the Internet. The development of artificial intelligence applications is so rapid that it is outpacing the adaptation of regulations. The new applications do not always generate exclusively positive effects; the potentially negative ones include disinformation on the Internet, i.e. information crafted using artificial intelligence that is inconsistent with the facts and disseminated on social media. This raises a number of questions about the development of artificial intelligence and its new applications, the possibilities that the next generations of artificial intelligence will create, and the possibility of teaching artificial intelligence to think, i.e. to realise artificial thought processes in a manner analogous or similar to those of the human mind.
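To make the layered-network description above concrete, here is a minimal NumPy sketch of a tiny two-layer network with random, untrained weights, showing how each layer applies its weights and a nonlinearity; "learning" would mean adjusting those weights to reduce a loss, typically by gradient descent and backpropagation:
```python
# Minimal sketch: forward pass through a tiny two-layer neural network
# with random, untrained weights (illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

x = rng.normal(size=(4,))                        # one input, 4 features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer: 8 neurons
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)    # output layer: 3 classes

h = relu(W1 @ x + b1)                # layer 1: weighted sum + activation
logits = W2 @ h + b2                 # layer 2: raw class scores
probs = np.exp(logits) / np.exp(logits).sum()    # softmax over classes
print(probs.round(3))                # probabilities sum to ~1
```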
In view of the above, I address the following question to the esteemed community of scientists and researchers:
The fourth technological revolution currently taking place is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning and artificial intelligence technologies... What's next? Intelligent, thinking, autonomous robots?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
The drive to build autonomous, thinking, intelligent robots and androids raises many ethical controversies and potential risks. In addition, the drive to build artificial consciousness, as a kind of continuation of the development of artificial intelligence, is also controversial.
What is your opinion on this topic?
Best regards,
Dariusz Prokopowicz
  • asked a question related to Big Data
Question
3 answers
How can artificial intelligence technologies be used effectively in universities, so that the development of artificial intelligence technologies exemplified by ChatGPT does not pose a threat to universities but rather expands the possibilities for the development of universities and of scientific research, including improving the efficiency of the research, analytical, teaching and scientific processes that use large amounts of multi-criteria data processed on computerised Big Data Analytics platforms?
The development of artificial intelligence applications today is almost limitless. Artificial intelligence technologies have been developed for many years, but in the last few years this development has significantly accelerated. Thanks to the ChatGPT language model made available on the Internet, the topic of artificial intelligence has, since the end of 2022, become one of the main topics of discussion across fields of knowledge, scientific disciplines, business applications, etc. ChatGPT has also become one of the most popular online platforms, gaining new users at a rate comparable to the most popular and fastest-growing social media sites. However, the currently developing applications of ChatGPT's intelligent language model have also started to generate negative effects and have overtaken the process of adapting systemic solutions and regulations to the situation. A serious risk has emerged of the rapid development of disinformation in online social media, with images, videos and texts generated by various artificial intelligence solutions presenting what can be described as 'fictitious facts': content that is difficult to distinguish from real facts and real events, and whose author, or rather generator, is difficult to diagnose. There is also a serious risk of non-compliance with copyright in the creation of certain types of 'works' created by artificial intelligence. This raises the question of the ethics of creating new works, works in which reliably realised creativity is, or should be, embodied.
Newly created works, such as photographs, films, textual studies, literary works, paintings, graphics, sculptures, architectural designs, technical and other innovations, computer programmes, patents, etc., contain the element of new solutions, concepts and innovation that is the result of human creativity. However, in the context of thousands of years of evolution of human abilities and creativity, it is only relatively recently that man has begun to assist himself, in the processes of creating something new, with advanced technology that in principle does this for him, but according to assumptions and rules that he determines. In recent years, these processes of using artificial intelligence to create a kind of 'works', with the application of more and more data and information and within increasingly automated processes, have been taking place at an ever faster pace. The development of the ChatGPT intelligent language model available on the Internet shows how dynamically creative work requiring multi-criteria processing of large amounts of data is, as it were, being ceded to increasingly automated processes. Since, for example, ChatGPT-created texts often lack full descriptions of data sources, source publications and bibliographic descriptions, as well as information on the extent of possible plagiarism, the scale of possible copyright infringement is large. Therefore, in the context of theses written by students and essays submitted for course credit at university, the use of a tool such as ChatGPT generates serious risks that such work will be written unreliably.
Therefore, it is necessary to create a system of digital marking of the various types of "works" created by artificial intelligence solutions, i.e., in addition to texts, also photographs, films, innovations, patents, computer software, new drugs, technical designs, artistic works, etc. Such a system of digital marking will be helpful in distinguishing the effects of human work from the increasingly substitutable effects of advanced data processing carried out by artificial intelligence. In addition, computerised anti-plagiarism platforms and programmes should be improved so that they diagnose text fragments, sentences, paragraphs and phrases borrowed from other texts, publications, articles, books, etc., as well as unattributed sources of data, information, formulas, models, definitions of new concepts, projects, innovative solutions, etc., and missing bibliographies. The artificial intelligence solutions currently being developed, such as ChatGPT and similar tools, should therefore be improved technically and procedurally as well as formally and legally, so that the scale of improper use of such tools generating negative effects, including the unreliable writing of journal articles, theses, descriptions of conducted research, results of analyses, etc., is significantly reduced. By significantly reducing the scale of these negative effects, also within universities, the practical applications of artificial intelligence in improving research, analytical and research-and-development work, and descriptions of research results, will be able to develop in the future. In this way, artificial intelligence technologies can be used effectively in universities, so that the development of artificial intelligence technologies, of which ChatGPT is an example, does not pose a threat to universities but rather expands the opportunities for the development of universities and of scientific research, including improving the efficiency of research and analytical processes that use large amounts of multi-criteria data processed on computerised Big Data Analytics platforms.
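One conceivable building block of such a digital marking system is sketched below (assuming Python; the record format, field names and generator label are my own hypothetical assumptions, not an existing standard): a provenance record whose SHA-256 digest ties it to the exact content of an AI-generated work, so that a registry could later check whether a submitted text matches a declared machine-generated one.

# Sketch of a hypothetical provenance record for an AI-generated work.
# The SHA-256 digest binds the record to the exact content; a registry
# could recompute it to verify a submitted text against declared works.
import hashlib, json, datetime

def provenance_record(content: str, generator: str) -> dict:
    return {
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "generator": generator,   # hypothetical label for the producing AI system
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

work = "Example of a machine-generated paragraph..."
record = provenance_record(work, generator="some-language-model")
print(json.dumps(record, indent=2))

# Verification: recompute the digest and compare it with the registered one.
assert hashlib.sha256(work.encode("utf-8")).hexdigest() == record["sha256"]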
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence technologies be used effectively in universities, so that the development of artificial intelligence technologies, as exemplified by ChatGPT, does not pose a threat to universities but rather expands the possibilities for the development of universities and of scientific research, including improving the efficiency of the research, analytical, teaching and scientific processes that use large amounts of multi-criteria data processed on computerised Big Data Analytics platforms?
And what is your opinion on this?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
Artificial intelligence can find application in improving scientific research processes and/or analytical processes in which, for example, large data sets are analyzed. In this regard, combining Big Data Analytics with certain artificial intelligence solutions, i.e. deep learning based on artificial neural networks, can generate significant improvements in the efficiency of the research and analytical processes conducted.
What is your opinion on this issue?
Best wishes,
Dariusz Prokopowicz
  • asked a question related to Big Data
Question
9 answers
Can Artificial Intelligence and Big Data Analytics help in the development of sustainable organic agriculture based on planning, arranging and managing biodiverse, multi-species crop agriculture?
In your opinion, can the new technologies of Industry 4.0, including especially artificial intelligence, machine learning and deep learning, applied in combination with the big data, information and knowledge collected and processed on Big Data Analytics platforms, help the development of sustainable organic agriculture based on planning, arranging and managing biodiverse, multi-species agricultural crops?
The process of planning, designing and arranging sustainable agricultural crops grown according to the principles of organic agriculture, which aims to restore highly sustainable, biodiverse natural agricultural ecosystems, should take into account many factors that are a mix of natural biotic, climatic, geological and abiotic factors, and the changes in these factors that have taken place over the last centuries or millennia as a result of the development of an unsustainable human civilisation and of a predatory economy based on intensive industrial development that ignored the issue of negative externalities towards the surrounding environment.
Considering what a complex, multifaceted process the planning, design, management and restoration of highly sustainable, biodiverse forest ecosystems and sustainable agricultural ecosystems should be, the application in this process of new generations of Industry 4.0 technologies, including, above all, artificial intelligence based on large sets of data, information and knowledge concerning many different aspects of nature, ecology, climate, civilisation, etc., collected and processed on Big Data Analytics platforms, may prove to be of great help.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, can the new technologies of Industry 4.0, including, above all, artificial intelligence, machine learning and deep learning, applied in combination with the large data sets, information and knowledge collected and processed on Big Data Analytics platforms, help the development of sustainable organic agriculture based on the planning, arrangement and management of biodiverse, multi-species agricultural crops?
Can artificial intelligence and Big Data Analytics help in the development of sustainable organic agriculture?
What is your opinion?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, artificial intelligence, big data analytics and other Industry 4.0/5.0 technologies, technologies typical of the current technological revolution, can help develop sustainable organic agriculture based on planning, organizing and managing biodiverse, multi-species crops.
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
  • asked a question related to Big Data
Question
5 answers
In your opinion, can the new technologies of Industry 4.0, including, above all, artificial intelligence, machine learning, deep learning applied in combination with large sets of data, information and knowledge collected and processed on Big Data Analytics platforms, help in the satellite analysis of the rate of biodiversity loss of the planet's different natural ecosystems?
As part of the technological advances of recent years, driven by the development of ICT information technologies and Industry 4.0, increasingly sophisticated analytical instruments and research techniques are being developed to carry out ever more complex, multifaceted, Big Data-based analyses of the various processes taking place in nature and to obtain increasingly precise research results. By combining ICT information technologies and Industry 4.0 with satellite technology, analyses of changes in the biodiversity of the planet's various natural ecosystems, carried out using satellites placed in planetary orbit, are also being improved. Taking into account the negative human impact on the biodiversity of the planet's natural ecosystems, ongoing since the beginning of the first technological and industrial revolution and especially in the Anthropocene epoch from the mid-20th century onwards, there is a growing need to counteract these negative processes and to increase the scale of, and outlays allocated to, the improvement of nature conservation systems and instruments, including the protection of the biodiversity of the planet's natural ecosystems.
Improving nature conservation and biodiversity protection systems also requires cyclical surveys of the state of biodiversity of the planet's individual terrestrial and marine natural ecosystems, together with analyses of progressive environmental degradation and of the rate of biodiversity loss. When more precise research results are obtained concerning changes in the state of the natural environment, the rate of biodiversity loss in particular terrestrial and marine ecosystems across climate zones, changes in the state of the climate, and the key civilisational determinants generating those changes, it becomes possible to apply specific actions and systemic solutions to counteract the negative processes of environmental degradation and biodiversity loss, improving nature protection techniques so that they are more effective and better adapted to the specific local biosphere, the climate conditions, the diagnosed processes of change, and economic factors. In this connection, artificial intelligence technology, which has been developing particularly rapidly in recent years, can also prove helpful in improving the planning, design, management and restoration of natural ecosystems characterised by a high degree of sustainability, biodiversity and naturalness, i.e. the restoration of natural ecosystems that existed in a specific area centuries ago. In this restoration of sustainable, highly biodiverse terrestrial and marine natural ecosystems, many primary factors must also be taken into account, including geological and climatic factors as well as the modifications previously applied to the area by man concerning geology, land irrigation, drainage, microclimate, soil quality, environmental pollution, and the presence of certain invasive species of flora, fauna, fungi and microorganisms. Therefore, the process of planning, designing, managing and restoring biodiverse natural ecosystems should take into account many of the above-mentioned factors, which are a mix of natural biotic, climatic, geological and abiotic factors, and the changes in these factors that have taken place over the last centuries or millennia, i.e. the changes and side-effects of the development of an unsustainable human civilisation and of a predatory economy based on intensive industrial development that ignored the issue of negative externalities towards the surrounding natural environment.
Considering what a complex, multifaceted process the planning, designing, arranging and restoring of the planet's biodiverse natural ecosystems should be, the application in this process of new generations of Industry 4.0 technologies, above all artificial intelligence based on large sets of data, information and knowledge concerning many different aspects of nature, ecology, climate, civilisation, etc., collected and processed on Big Data Analytics platforms, can be of great help. On the other hand, artificial intelligence technology combined with satellite analytics can also be of great help in improving research processes aimed at investigating changes in the state of the planet's biosphere, including analysing the decline in biodiversity of individual ecosystems in specific natural areas, precisely diagnosing the rate of these negative changes resulting in environmental degradation, and identifying the key determinants causing specific changes.
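As one concrete example of what such satellite analytics involves at the lowest level, the sketch below (assuming Python with NumPy; the reflectance values are invented) computes NDVI, a standard vegetation index derived from the red and near-infrared bands of satellite imagery and often used as an input to vegetation and ecosystem-health monitoring:

# NDVI (Normalised Difference Vegetation Index) from two satellite bands.
# Assumes NumPy; the 3x3 reflectance grids are invented for illustration.
import numpy as np

red = np.array([[0.10, 0.12, 0.30],
                [0.11, 0.40, 0.35],
                [0.09, 0.10, 0.45]])
nir = np.array([[0.60, 0.58, 0.32],
                [0.62, 0.41, 0.36],
                [0.65, 0.59, 0.44]])

# NDVI = (NIR - RED) / (NIR + RED); values near 1 indicate dense vegetation,
# values near 0 bare soil, and negative values water or clouds.
ndvi = (nir - red) / (nir + red + 1e-9)   # epsilon guards against division by zero

print(np.round(ndvi, 2))
# Tracking how the mean NDVI of an area changes between acquisitions is one
# simple signal of vegetation loss feeding larger biodiversity analyses.
print("mean NDVI:", float(ndvi.mean()))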
I will write more about this in the book I am currently writing. In this monograph, I will include the results of my research on this issue. I invite you to join me in scientific cooperation on this issue.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, can the new technologies of Industry 4.0, including especially artificial intelligence, machine learning, deep learning applied in combination with large datasets, information and knowledge collected and processed on Big Data Analytics platforms help in the satellite analysis of the rate of biodiversity loss of the planet's various natural ecosystems?
Can artificial intelligence and Big Data Analytics help in the satellite analysis of the rate of biodiversity loss of the planet's different natural ecosystems?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Counting on your opinions, on getting to know your personal opinion, on an honest approach to discussing scientific issues and not ChatGPT-generated ready-made answers, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, thanks to the combination of the above-mentioned technologies (artificial intelligence, Big Data Analytics, satellite technologies, broadband data transfer, etc.), there are new opportunities to analyze changes in the state of nature, changes in biodiversity loss, climate change and the impact of these changes on the state of the planet's natural ecosystems.
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
  • asked a question related to Big Data
Question
2 answers
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining quantum computing, Big Data Analytics of large sets of data and information extracted from, e.g., large numbers of websites and social media sites, cloud computing, satellite analytics, etc., and artificial intelligence in joint applications for the construction of integrated analytical platforms, is it possible to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thus significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes, etc.?
Ongoing technological progress is increasing the technical possibilities both of conducting research, collecting and assembling large amounts of research data, and of their multi-criteria processing using ICT information technologies and Industry 4.0. Before the development of ICT information technologies, IT tools, personal computers, etc. in the second half of the 20th century, as part of the third technological revolution, computerised, semi-automated processing of large data sets was very difficult or impossible. As a result, the building of multi-criteria, multi-aspect models of complex macro-process structures based on big data and information, as well as simulation and forecasting models, was limited or practically impossible. However, the technological advances made in the current fourth technological revolution and the development of Industry 4.0 technology have changed a great deal in this regard. The current fourth technological revolution is, among other things, a revolution in the improvement of multi-criteria, computerised analytical techniques based on large data sets. Industry 4.0 technologies, including Big Data Analytics, are used in the multi-criteria processing and analysis of large data sets. Artificial intelligence (AI) can be useful for scaling up the automation of research processes and the multi-faceted processing of big data obtained from research.
The technological advances taking place are contributing to the improvement of computerised analytical techniques conducted on increasingly large data sets. The application of the technologies of the fourth technological revolution, including ICT information technologies and Industry 4.0 in the process of conducting multi-criteria analyses and simulation and forecasting models conducted on large sets of information and data increases the efficiency of research and analytical processes. Increasingly, in research conducted within different scientific disciplines and different fields of knowledge, analytical processes are carried out, among others, using computerised analytical tools including Big Data Analytics in conjunction with other Industry 4.0 technologies.
When these analytical tools are augmented with Internet of Things technology, cloud computing and satellite-implemented sensing and monitoring techniques, opportunities arise for real-time, multi-criteria analytics of large areas, e.g. nature, climate and others, conducted using satellite technology. When machine learning technology, deep learning, artificial intelligence, multi-criteria simulation models, digital twins are added to these analytical and research techniques, opportunities arise for creating predictive simulations for multi-factor, complex macro processes realised in real time. Complex, multi-faceted macro processes, the study of which is facilitated by the application of new ICT information technologies and Industry 4.0, include, on the one hand, multi-factorial natural, climatic, ecological, etc. processes and those concerning changes in the state of the environment, environmental pollution, changes in the state of ecosystems, biodiversity, changes in the state of soils in agricultural fields, changes in the state of moisture in forested areas, environmental monitoring, deforestation of areas, etc. caused by civilisation factors. On the other hand, complex, multifaceted macroprocesses whose research processes are improved by the application of new technologies include economic, social, financial, etc. processes in the context of the functioning of entire economies, economic regions, continents or in global terms.
Year on year, due to technological advances in ICT, including the use of new generations of microprocessors characterised by ever-increasing computing power, the possibilities for increasingly efficient, multi-criteria processing of large collections of data and information are growing. Artificial intelligence can be particularly useful for the selective and precise retrieval of specific, defined types of information and data extracted from many selected types of websites and the real-time transfer and processing of this data in database systems organised in cloud computing on Big Data Analytics platforms, which would be accessed by a system managing a built and updated model of a specific macro-process using digital twin technology. In addition, the use of supercomputers, including quantum computers characterised by particularly large computational capacities for processing very large data sets, can significantly increase the scale of data and information processed within the framework of multi-criteria analyses of natural, climatic, geological, social, economic, etc. macroprocesses taking place and the creation of simulation models concerning them.
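A minimal sketch of the kind of predictive analysis meant here (assuming Python with scikit-learn and NumPy; the time series is synthetic): a macro-process series is turned into lagged features, and a regression model is fitted to forecast the next value. Real macroprocess models would add many exogenous variables and far richer model classes.

# Forecasting a macro-process time series from its own lagged values.
# Assumes scikit-learn and NumPy; the series itself is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.1, 1.0, 200))   # synthetic trending series

LAGS = 3
# Each row of X holds 3 consecutive values; y is the value that follows them.
X = np.column_stack([series[i:len(series) - LAGS + i] for i in range(LAGS)])
y = series[LAGS:]

model = LinearRegression().fit(X[:-20], y[:-20])        # train on the past
print("R^2 on held-out tail:", model.score(X[-20:], y[-20:]))
print("next-step forecast:", model.predict(series[-LAGS:].reshape(1, -1)))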
In view of the above, I address the following question to the esteemed community of scientists and researchers:
By combining quantum computing, Big Data Analytics of large sets of data and information extracted from, inter alia, a large number of websites and social media portals, cloud computing, satellite analytics, etc., and artificial intelligence in joint applications for the construction of integrated analytical platforms, is it possible to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thereby significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes, etc.?
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence, is it possible to improve the analysis of macroprocesses?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, thanks to the combination of the above-mentioned technologies, there are new opportunities to expand research and analytical capabilities, to process large data sets within the framework of Big Data Analytics, to develop predictive models for various types of macro-processes.
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
  • asked a question related to Big Data
Question
2 answers
Hello everyone,
I am Danillo Souza, and I am currently a Post-Doc Researcher at the Basque Center for Applied Mathematics (BCAM), working in the Mathematical, Computational and Experimental Neuroscience Group (MCEN). One of the challenges of my work is to derive optimal tools to extract topological and/or geometrical information from big data.
I am trying to submit a work to arXiv and unfortunately, an endorsement in Physics - Data Analysis and Statistics is required. I was wondering if some researcher could be my endorser in this area.
Beforehand, I appreciate your efforts in trying to help me.
With kind regards,
Danillo
Danillo Barros De Souza requests your endorsement to submit an article to the physics.data-an section of arXiv. To tell us that you would (or would not) like to endorse this person, please visit the following URL: https://arxiv.org/auth/endorse?x=UOKIX3 If that URL does not work for you, please visit http://arxiv.org/auth/endorse.php and enter the following six-digit alphanumeric string: Endorsement Code: UOKIX3
Relevant answer
Answer
Publish your paper for free
_________________________
Dear Researchers and postgraduate students
MESOPOTAMIAN JOURNAL OF BIG DATA (MJBD), issued by Mesopotamian Academic Press, welcomes original research articles, short papers, long papers and review papers for publication in the next issue. The journal does not require any publication fee or article processing charge, and all papers are published for free.
Journal info.
1- Publication fee: free
2- Frequency: 1 issue per year
3- Subject: computer science, Big Data, Parallel Processing, Parallel Computing and any related fields
4- ISSN: 2958-6453
5- Published by: Mesopotamian Academic Press.
6- Contact: email: [email protected]
Managing Editor: Dr. Ahmed Ali
The journal is indexed in:
1- Crossref
2- DOAJ
3- Google Scholar
4- ResearchGate
  • asked a question related to Big Data
Question
3 answers
Dear Scholars, Researchers, and Academics, We are pleased to announce a Call for Papers for the upcoming Special Issue to be hosted by the Mesopotamian Academic Press. This prestigious event is dedicated to fostering intellectual exchange and advancing scholarship in the field of Computer Science. The Mesopotamian Academic Press takes pride in its commitment to nurturing the academic community by providing a platform for thought-provoking discussions and interdisciplinary collaborations. We invite contributions from scholars, researchers, and academics working in various disciplines, such as Big Data, Cybersecurity, Information Technology, and beyond, to submit their original research papers and engage in lively discussions that delve into the field's multifaceted dimensions. Sincerely, Mesopotamian Academic Press https://mesopotamian.press/journals/index.php/index/index
  • asked a question related to Big Data
Question
2 answers
It is a laborious task to search an extensive library of documents for useful information. With the advancement of big data and smart technologies, could it be feasible to create a smart robot to help scientists read literature? How can this be achieved?
Robot Capabilities: Search and Summarisation. We ask the smart robot a question, and it searches the library of written works and gives us a brief answer.
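A very small sketch of the 'search' half of such a robot (assuming Python with scikit-learn installed; the three-document 'library' is invented): TF-IDF retrieval ranks documents by similarity to a question, and a summarisation model could then condense the top hits into the brief answer described above.

# Toy literature-search step: rank documents by TF-IDF similarity to a query.
# Assumes scikit-learn; the mini 'library' is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

library = [
    "Deep learning methods for protein structure prediction.",
    "Survey of big data platforms for genomics pipelines.",
    "Field study of pollinator decline in temperate forests.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(library)

query = "big data in genomics"
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]

best = int(scores.argmax())
print(f"best match (score {scores[best]:.2f}):", library[best])
# A summarisation step (e.g. an abstractive language model) would then
# condense the top-ranked documents into a short answer.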
  • asked a question related to Big Data
Question
4 answers
Which new ICT information technologies are most helpful in protecting the biodiversity of the planet's natural ecosystems?
What are examples of new technologies typical of the current fourth technological revolution that help protect the biodiversity of the planet's natural ecosystems?
Which new technologies, including ICT information technologies, technologies categorized as Industry 4.0 or Industry 5.0 are helping to protect the biodiversity of the planet's natural ecosystems?
How do new Big Data Analytics and Artificial Intelligence technologies, including deep learning based on artificial neural networks, help protect the biodiversity of the planet's natural ecosystems?
New technologies, including ICT information technologies and technologies categorized as Industry 4.0 or Industry 5.0, are finding new applications. These technologies are currently developing rapidly and are an important factor in the current fourth technological revolution. On the other hand, due to the still high emissions of greenhouse gases generating the process of global warming, to progressive climate change with increasingly frequent weather anomalies and climatic disasters, to increasing environmental pollution, to still rapidly shrinking forest areas and to the predatory forest management being carried out, the level of biodiversity of the planet's natural ecosystems is rapidly decreasing. Therefore, it is necessary to engage new technologies, including ICT information technologies and technologies categorized as Industry 4.0/Industry 5.0, among them new Big Data Analytics and Artificial Intelligence technologies, in order to improve and scale up the protection of the biodiversity of the planet's natural ecosystems.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How do the new technologies of Big Data Analytics and artificial intelligence, including deep learning based on artificial neural networks, help to protect the biodiversity of the planet's natural ecosystems?
Which new technologies, including ICT information technologies, technologies categorized as Industry 4.0 or Industry 5.0 are helping to protect the biodiversity of the planet's natural ecosystems?
What are examples of new technologies that help protect the biodiversity of the planet's natural ecosystems?
How do new technologies help protect the biodiversity of the planet's natural ecosystems?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Answer to the fourth question:
New Big Data Analytics and Artificial Intelligence (AI) technologies, including deep learning based on artificial neural networks, have become valuable tools in protecting the biodiversity of the planet's natural ecosystems in several ways:
  1. Species Identification and Monitoring: AI-driven image recognition and deep learning algorithms can automatically identify and track species in photos or videos, even in complex natural environments. This aids in wildlife monitoring and population assessment (see the sketch at the end of this answer).
  2. Biodiversity Surveys: AI can process vast amounts of ecological data collected from various sources, such as remote sensors and camera traps, to conduct biodiversity surveys. This helps scientists and conservationists gain insights into species diversity and distribution patterns.
  3. Ecosystem Health Assessment: AI can analyze ecological data to assess the health of ecosystems. It can detect changes in vegetation, water quality, and other environmental indicators that may signify ecosystem degradation.
  4. Predictive Modeling: AI and machine learning models can predict changes in biodiversity based on environmental factors. For example, they can forecast shifts in species distribution due to climate change or habitat loss, allowing for proactive conservation measures.
  5. Illegal Activity Detection: AI algorithms can analyze patterns in audio and video feeds to detect illegal activities such as poaching, illegal logging, and fishing. This enables law enforcement agencies to respond more effectively.
  6. Habitat Mapping and Restoration: AI can process satellite and drone imagery to map habitats and assess their quality. This information is essential for habitat restoration and conservation planning.
  7. Data Integration: Big Data Analytics can integrate data from various sources, such as field observations, remote sensing, and genetic data, to provide a comprehensive view of ecosystems. This holistic approach helps in better understanding and managing biodiversity.
  8. Genomic Conservation: AI can assist in genomic research by identifying genetic markers related to species' health and adaptability. This information is crucial for managing and conserving endangered species.
  9. Citizen Science Support: AI-powered platforms can assist citizen scientists in species identification and data collection, making it easier for the public to contribute to biodiversity research.
  10. Rapid Response to Threats: AI can process and analyze data in real-time, allowing for rapid responses to environmental threats or emergencies, such as oil spills or natural disasters, to minimize damage to ecosystems.
  11. Conservation Planning: AI-based optimization algorithms can help conservationists plan protected areas and reserves to maximize the preservation of biodiversity.
  12. Educational Tools: AI-driven educational tools, including virtual reality and augmented reality experiences, can raise awareness about biodiversity and conservation, fostering public engagement and support for conservation efforts.
By leveraging these technologies, researchers and conservationists can gather and analyze data more efficiently, make informed decisions, and implement targeted conservation strategies. This, in turn, enhances our ability to protect and sustain the planet's natural ecosystems and the rich biodiversity they contain.
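To illustrate point 1 above, here is a minimal sketch of species identification with a pretrained convolutional network. It assumes Python with PyTorch and torchvision (0.13 or later) installed, and "animal.jpg" is a placeholder path; this is an illustration of the technique, not a production wildlife-monitoring pipeline.

# Sketch: identify the subject of a wildlife photo with a pretrained CNN.
# Assumes PyTorch + torchvision >= 0.13; "animal.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()      # the preprocessing that matches the weights

img = Image.open("animal.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)   # add a batch dimension: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = int(probs.argmax())
print(weights.meta["categories"][top], float(probs[top]))
# In practice, a model fine-tuned on regional species would replace the
# generic ImageNet classes used by this off-the-shelf network.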
  • asked a question related to Big Data
Question
3 answers
Which laptop is recommended for data science and managing large datasets among the following options?
  1. MacBook Pro MPHG3 - 2023 (Apple)
  2. ROG Strix SCAR 18 G834JY-N5049-i9 32GB 1SSD RTX4090 (ASUS)
  3. Legion 7 Pro-i9 32GB 1SSD RTX 4080 (Lenovo)
  4. Raider GE78HX 13VH-i9 32GB 2SSD RTX4080 (MSI)
Relevant answer
Answer
  1. The Legion 7 Pro (i9, 32GB RAM, 1TB SSD, RTX 4080) from Lenovo is also a good choice for working with large datasets.
  • asked a question related to Big Data
Question
5 answers
Investigating how big data analytics can aid prosecutors in making informed decisions regarding charging, plea bargaining, and case prioritization, and assessing its impact on prosecutorial discretion and efficiency.
Relevant answer
Answer
The advent of big data analytics holds transformative potential for multiple sectors, the criminal justice system being a notable candidate. Within this landscape, prosecutorial authorities stand to gain invaluable insights that can profoundly inform the multifaceted decision-making processes of charging, plea negotiation, and case prioritization. Notwithstanding its promise, the integration of big data analytics is replete with concomitant challenges and ethical quandaries that warrant meticulous scrutiny.
Advantages
  1. Risk Assessment: Leveraging big data analytics allows for a nuanced evaluation of risks associated with a suspect, encompassing probabilities of reoffending (a toy illustration follows at the end of this answer). This quantitative insight can enable prosecutors to formulate judicious choices regarding charging, plea arrangements, and sentencing recommendations.
  2. Case Prioritization: Analytical systems may scrutinize each case's merits via a multidimensional analysis that incorporates variables such as the robustness of evidence, societal interest, and consequential social impact. This can result in a more efficacious allocation of prosecutorial resources.
  3. Optimization of Plea Negotiations: Employing data analytics for historical analysis of prior plea bargains and their resultant outcomes can provide a template for optimizing future negotiations, thereby streamlining the plea bargaining process.
  4. Predictive Analytics: Utilization of predictive models based on historical data sets can equip prosecutors with probabilistic outcomes of proceeding to trial vis-a-vis accepting plea deals, thus serving as an instrumental tool in decision-making.
  5. Trend Monitoring: Analytical tools can facilitate an understanding of crime patterns and trends, enabling prosecutorial authorities to strategically focus their attention and resources on specific classes of crimes.
  6. Resource Allocation: Data-driven models can inform strategies for the efficient deployment of human and material resources, thereby optimizing workload management.
  7. Enhanced Transparency and Accountability: The systemic tracking and analysis of prosecutorial decisions through big data can bolster transparency and accountability, ensuring adherence to principles of justice and fairness.
Obstacles
  1. Data Quality: The veracity of data remains a pivotal concern, as inaccuracies or inherent biases can culminate in misleading insights and consequentially unjust outcomes.
  2. Privacy Concerns: The large-scale collation and analysis of data engender privacy implications, such as unauthorized data collection and misuse, necessitating rigorous oversight.
  3. Ethical and Legal Constraints: The ethical exigencies surrounding issues like racial profiling, socio-economic bias, and other discriminatory practices require vigilant management to maintain the ethical integrity of data usage.
  4. Technical Acumen: The implementation of big data analytics necessitates specialized training for prosecutors and their supporting staff, thus imposing additional resource commitments.
Impact on Prosecutorial Discretion and Efficiency
  1. Standardization versus Discretion: While the deployment of data analytics could engender a more standardized prosecutorial approach, it may concurrently constrain discretionary prerogatives. Striking an optimal balance between these competing considerations is imperative.
  2. Efficiency: The automation inherent in risk assessments and case prioritizations empowers prosecutors to operate more efficiently, enabling them to focus on cases that warrant their specialized expertise.
  3. Fairness and Consistency: The potential exists for data analytics to ameliorate systemic biases and external pressures influencing prosecutorial decisions, thus contributing to a more equitable and consistent legal process.
  4. Public Perception: The judicious utilization of big data analytics has the potential to fortify public trust by enhancing the transparency and evidentiary basis of prosecutorial actions.
  5. Accountability: The analytical tools can facilitate the establishment of an accountability framework by longitudinally tracking decision-making patterns and their subsequent outcomes.
In summation, big data analytics avails an array of technological tools capable of augmenting both the efficiency and fairness of prosecutorial activities. However, its successful implementation is contingent upon cautious planning and an unwavering commitment to ethical considerations.
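As a purely illustrative sketch of the risk-assessment point above: the toy model below (assuming Python with scikit-learn and NumPy; the features, labels and data are entirely invented) shows only the mechanical core of a data-driven risk score. A real system would require representative data, rigorous bias auditing and legal review, as discussed above.

# Illustrative only: a toy recidivism-style risk score from synthetic data.
# Features, labels and data are invented; real deployments demand careful
# bias auditing, validation and legal oversight.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Invented features: [prior_offences, months_since_last_case]
X = np.column_stack([rng.poisson(2, 500), rng.uniform(0, 120, 500)])
# Invented labels: more priors and more recent cases -> higher reoffence odds
y = (0.4 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 1, 500) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
case = np.array([[3, 6.0]])          # 3 prior offences, 6 months since last case
print("estimated risk:", float(model.predict_proba(case)[0, 1]))
# The probability is one input among many for a human decision-maker,
# never an automatic charging or sentencing decision.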
  • asked a question related to Big Data
Question
6 answers
Data Security Big Data
Relevant answer
Answer
Big Data needs to be secured for several important reasons:
  1. Confidentiality: Big Data often contains sensitive information, including personal, financial, and proprietary data. Ensuring its confidentiality is crucial to prevent unauthorized access and data breaches.
  2. Integrity: Data integrity ensures that the information remains accurate and unaltered. Securing Big Data helps prevent unauthorized modifications or tampering.
  3. Availability: Protecting Big Data ensures that it remains available to authorized users when needed, preventing disruptions that could lead to service outages or downtime.
  4. Privacy: Large datasets can include personal information that, if mishandled or breached, could lead to privacy violations and identity theft.
  5. Regulatory Compliance: Many industries and jurisdictions have regulations that mandate the security and protection of data, such as GDPR in Europe or HIPAA in healthcare. Non-compliance can lead to legal and financial consequences.
  6. Business Reputation: A data breach can significantly damage an organization's reputation and erode customer trust, leading to loss of business and revenue.
  7. Data Monetization: Organizations often use Big Data to gain insights and generate revenue. Securing this data is critical to protect their investment and competitive advantage.
  8. Competitive Advantage: Companies may use Big Data to analyze market trends, customer behaviors, and other insights that can give them a competitive edge. Securing this data safeguards their strategic advantage.
As for national security, yes, Big Data can pose potential threats:
  1. Cybersecurity: Large-scale data breaches and cyberattacks can target critical infrastructure, government systems, and sensitive information, jeopardizing national security.
  2. Espionage: Hostile actors, including foreign governments and hackers, can exploit vulnerabilities to steal classified information, intelligence, and strategic plans.
  3. Disinformation: Manipulating and spreading false information within Big Data can influence public opinion, political stability, and social harmony, impacting a nation's security.
  4. Crime Detection and Prevention: Securing Big Data is essential for law enforcement agencies to effectively analyze and respond to criminal activities and potential threats.
  5. Counterterrorism: Analyzing vast amounts of data can help identify patterns and trends related to terrorism and radicalization, aiding counterterrorism efforts.
  6. Geopolitical Intelligence: Analyzing data can provide insights into global events, potential conflicts, and emerging threats that affect a nation's security interests.
  7. Border Security: Big Data analytics can enhance border security by analyzing travel patterns, identifying potential threats, and managing immigration and customs processes.
In summary, securing Big Data is crucial to protecting individuals' privacy, organizations' interests, and national security. The potential for data breaches, cyberattacks, and misuse of data underscores the importance of robust security measures and practices.
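As a minimal sketch of the confidentiality point, the example below (assuming Python with the cryptography package installed; the record is invented and key management is deliberately out of scope) shows symmetric encryption of a sensitive record before it is written to a data store:

# Sketch: protect the confidentiality of a record at rest with encryption.
# Assumes the 'cryptography' package; key management is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in production, held in a key-management system
cipher = Fernet(key)

record = b'{"customer_id": 123, "balance": "example"}'
token = cipher.encrypt(record)  # this ciphertext is what lands in the data store

print(token[:40])
assert cipher.decrypt(token) == record  # only key holders can read it back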
  • asked a question related to Big Data
Question
3 answers
What kind of innovative startups do you think can be created using a new generation of smart tools similar to ChatGPT and/or whose business activities would be helped by such smart tools and/or certain new business concepts would be based on such smart tools?
There is a growing body of data suggesting that innovative startups may be created using the next generation of ChatGPT-like smart tools, that the business activities of existing firms may be helped by such smart tools, and that certain new business concepts may be based on them. On the one hand, Internet startups are already emerging that are based on artificial intelligence systems specialised in specific areas of creating textual, graphic, video and other material, in variants of something similar to ChatGPT. On the other hand, some of these kinds of solutions may in the future turn into a kind of online business advisor generating advice for entrepreneurs developing new innovative startups.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What kind of innovative startups do you think could be developed using a new generation of smart tools similar to ChatGPT and/or whose business activities would be helped by such smart tools and/or certain new business concepts would be based on such smart tools?
What kind of innovative startups can be created based on the next generation of ChatGPT-like smart tools?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Counting on your opinions, on getting to know your personal opinion, on a fair approach to the discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Answer
Innovative startups can emerge in AI-driven customer service, virtual personal assistants, content generation, and language translation. Think of a "ChatGPT Café" ☕ where AI baristas craft witty conversations, making your latte and linguistics equally frothy! 🚀🤖
  • asked a question related to Big Data
Question
1 answer
Big Data is very useful. What is its significance in renewable energy?
Relevant answer
Answer
Big data can help identify ways to improve effectiveness with renewable resources, including reducing waste, which in turn allows for more profit for businesses using renewable energy. It also gives a company a more competitive edge.
  • asked a question related to Big Data
Question
3 answers
I have looked at database management and applications, at data-sets and their use in different contexts. I have looked at digital technology in general, and I have noticed that there seems to be a single split:
- binary computers, performing number crunching (basically), behind which you find machine learning (ML), deep learning (DL), reinforcement learning (RL), etc., at the root of current AI
- quantum computing, still with numbers as key objects, with added probability distributions, randomisation, etc. This deviates from deterministic binary computing, but only to a certain extent.
Then, WHAT ABOUT computing "DIRECTLY ON SETS", instead of "speaking of sets" and actually only "extracting vectors of numbers from them"? We can program and operate with non-numerical objects; old languages like LISP and LELISP, where the basic objects are lists of characters of any length and shape, did just that decades ago.
So, to every desktop user of spreadsheets (the degree-zero of data-set analytics) I am saying: you work with matrices, the mathematical name for tables of numbers; you know about data-sets and about analytics. Why would not YOU put the two together? Sets are flexible. Sets are sometimes incorrectly named "bags" because it sounds fashionable (but bags have holes, they may be of plastic and not reusable; sets are more sustainable, math is clean - joking). It's cool to speak of "bags of words"; I don't do that. Sets, why? Sets handle heterogeneity, and they can be formed with anything you need them to contain, in the same way a vehicle can carry people, dogs, potatoes, water, diamonds, paper, sand, computers. Matrices? Matrices nicely "vector-multiply" and are efficient in any area of work, from engineering to accounting to any science or humanities domain. They can be simplified in many cases: along some geometric directions (eigenvectors, eigenvalues) operations get simple, and sometimes a change of reference vectors gives a diagonal matrix, with zeros everywhere except on the diagonal, by a simple change of coordinates (a geometric transformation).
HOW DO WE DO THAT IN PRACTICE? Compute on SETS, NOT ON NUMBERS? One can imagine the huge efficiencies potentially gained in some domains (new: yet to be explored, maybe BY YOU? IN YOUR AREA). Here is the math, simple: it combines knowledge of 11-year-olds (basic set theory) and knowledge of 15-year-olds (basic matrix theory). SEE FOR YOURSELF in the sketch below, and please POST YOUR VIEW on where and how to apply...
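In the spirit of the question, here is one minimal way to 'compute directly on sets' in practice: a sketch in plain Python, whose built-in set types handle heterogeneous elements natively (the records are invented). Pairwise set operations produce a similarity matrix, bridging sets and matrices exactly as suggested above.

# Computing directly on sets: heterogeneous records as Python frozensets,
# combined into a matrix via pairwise Jaccard similarity. Records are invented.
records = [
    frozenset({"alice", 42, ("paris", "lyon")}),
    frozenset({"bob", 42, "diamonds"}),
    frozenset({"alice", "sand", ("paris", "lyon")}),
]

def jaccard(a: frozenset, b: frozenset) -> float:
    # |A intersect B| / |A union B|: a set-native measure, no numeric encoding
    return len(a & b) / len(a | b)

# The set computations feed an ordinary matrix, so matrix tools apply afterwards.
matrix = [[round(jaccard(a, b), 2) for b in records] for a in records]
for row in matrix:
    print(row)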
Relevant answer
Answer
I am in line with Aparna Sathya Murthy: there are different levels of computing or computational methods. Number crunching is helpful for, and used in, any industry. Data crunching commonly involves stripping out unwanted information and formatting, as well as cleaning and restructuring the data. Analyzing large amounts of information can be invaluable for decision-making, but companies often underestimate the amount of effort required to transform data into a form that can be analyzed. Even accounting is much more than number crunching.
Computers are like humans - they do everything except think.
John von Neumann
  • asked a question related to Big Data
Question
2 answers
Recent data suggests that human activities are causing significant changes to our beloved Earth. However, it is worth noting that many countries in Asia, Europe, and other regions are experiencing low birth rates. This trend indicates a potential decrease in the world population in the future.
Do you believe this could contribute to the long-term sustainability of the Earth?
Relevant answer
Answer
Dear Mr. Ali:
Some researchers at the beginning of the sustainability debates stressed this idea (e.g., the Population Bomb of Paul and Anne Ehrlich), making the demographic question one of the central points (Ehrlich and Ehrlich, 1968) in the famous IPAT debate (Ehrlich and Holdren, 1971). Many authors considered that this debate disregarded the complexity of social structures and put the burden of the problem on the poor. In fact, the World3 model of the MIT Limits to Growth study (Meadows et al. 1972) seemed to suggest that the decline of population was one of the issues associated with the scenarios of unlimited economic growth, not the cause.
Following the idea that demographic dynamics and social behaviour are complex, since not all people have the same standard of living, I believe that the problem cannot be reduced to a general decrease in population.
For example, not all people bear the same responsibility for the degradation of ecosystems. Here in Western countries, one person accounts for more energy and materials consumption than one in the countries of the Global South (there are many works accounting for social metabolism in different countries), so the associated environmental consequences are higher.
In the case of a global population reduction, it is not the same whether this reduction arrives in the Global South, where the standard of living is relatively low, or in the Global North, where the standard of living is higher. Also, the people of the Global South need more resources to reduce hunger and poverty, to improve health, etc.
This is a complex issue, but an interesting question.
  • asked a question related to Big Data
Question
1 answer
Recently there has been a growing inclination to use big data available in the public domain to measure the poverty rate of individuals and households. One such source is Facebook data. But there is debate about whether such aggregate and remotely indirect indicators can accurately predict the poverty status of individuals and households.
Relevant answer
Answer
The first thing that comes to my mind is the number of people who are too poor to afford access to Facebook. So, you need some reason to define Facebook users as your population of interest, along with interesting research questions about the rate of poverty in that particular population.
  • asked a question related to Big Data
Question
2 answers
..
Relevant answer
Answer
MapReduce is a programming model and framework designed to process and analyze large volumes of data in parallel across a distributed computing cluster. It is commonly used in big data processing tasks where data is too large to be processed on a single machine. Here are some scenarios where using MapReduce with big data is beneficial:
  1. Large-scale data processing: MapReduce is well-suited for processing massive volumes of data that cannot fit in memory or be processed on a single machine. It provides an efficient way to distribute the workload across a cluster of machines, enabling parallel processing and faster execution times.
  2. Batch processing: MapReduce is primarily used for batch processing tasks where data is processed in bulk rather than real-time. It is commonly employed in scenarios such as log analysis, data extraction, transformation, and loading (ETL) processes, or generating reports from large datasets.
  3. Unstructured or semi-structured data: MapReduce can handle unstructured or semi-structured data formats, such as text files, XML, JSON, or log files. It allows you to apply custom map and reduce functions to extract relevant information, perform aggregations, or apply transformations on the data.
  4. Data-intensive computations: MapReduce is useful for performing complex computations on large datasets. It enables parallel execution of operations like filtering, sorting, counting, aggregating, or calculating statistical measures across distributed data partitions.
  5. Scalability and fault tolerance: MapReduce provides built-in scalability and fault tolerance. It can handle the failure of individual nodes in the cluster and automatically redistribute the work to other available nodes, ensuring the reliability and resilience of big data processing jobs.
  6. Distributed storage systems: MapReduce is commonly used in conjunction with distributed file systems like Hadoop Distributed File System (HDFS) or cloud-based storage systems. These systems allow the data to be distributed across multiple nodes, providing efficient data access and reducing data transfer overhead.
It's worth noting that with advancements in big data processing frameworks, such as Apache Spark, which provides a more flexible and performant alternative to MapReduce, the use of MapReduce has somewhat diminished. However, MapReduce remains relevant for specific use cases, especially in legacy systems or scenarios where Hadoop MapReduce is the preferred choice.
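To make the map/shuffle/reduce pattern concrete, below is a minimal, self-contained word-count sketch in plain Python (no cluster required). The function names map_phase, shuffle, and reduce_phase are illustrative, not a real framework API; a Hadoop cluster performs the same three steps, but distributed across machines.

```python
# Minimal sketch of the MapReduce word-count pattern in plain Python.
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_phase(grouped):
    # Reduce: aggregate the grouped values for each key.
    for key, values in grouped:
        yield key, sum(values)

if __name__ == "__main__":
    documents = ["big data needs big tools", "map and reduce scale out"]
    for word, count in sorted(reduce_phase(shuffle(map_phase(documents)))):
        print(word, count)
```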
  • asked a question related to Big Data
Question
6 answers
What specific problems do those neural network architectures face when working with big data?
Relevant answer
Answer
Georgi Hristov, when working with big data, recurrent neural networks (RNNs), convolutional neural networks (CNNs), and generative adversarial networks (GANs) can face specific challenges. Here are some of the problems encountered by these neural network architectures in the context of big data:
1. Computational complexity: Big data often implies a significant increase in the volume and complexity of the data. RNNs, CNNs, and GANs require extensive computational resources to process and analyze large datasets. Training and inference times can be considerably longer, requiring high-performance hardware or distributed computing systems.
2. Memory limitations: Large datasets may not fit entirely into the memory available for training or inference. RNNs, CNNs, and GANs typically require storing intermediate computations, model parameters, and gradients, which can exceed memory capacities. Handling memory limitations becomes crucial to ensure efficient processing.
3. Overfitting: Big data can still suffer from overfitting, where models become overly specialized to the training data and fail to generalize well to unseen examples. This issue is especially relevant when training deep neural networks on vast amounts of data. Regularization techniques, such as dropout or weight decay, may be needed to mitigate overfitting.
4. Lack of labeled data: Big datasets might not always have complete or accurate labels, which can hinder supervised learning tasks. RNNs and CNNs often rely on labeled data for tasks like classification or segmentation. Insufficient labeled data can lead to challenges in model training and performance.
5. Training instability: With big data, training neural networks can become more unstable. Gradient updates may oscillate or diverge due to the increased complexity and the potential presence of noisy or misleading patterns in large datasets. Careful selection of optimization algorithms, learning rates, and adaptive learning rate strategies becomes crucial.
6. Data preprocessing and augmentation: Preprocessing big datasets to extract relevant features and ensure data quality can be a time-consuming process. Similarly, data augmentation techniques, which are commonly used to artificially increase the dataset size and enhance model generalization, can become computationally expensive with large volumes of data.
7. Scalability and distributed processing: When dealing with big data, scalability becomes essential. Neural network architectures need to scale efficiently across multiple computing nodes or GPUs to handle the increased workload. Designing distributed training algorithms and ensuring efficient data parallelism or model parallelism is necessary.
These challenges highlight some of the specific problems faced by RNNs, CNNs, and GANs when working with big data. Researchers and practitioners continually work on developing novel techniques and approaches to address these issues and improve the performance and efficiency of neural network models in the context of large-scale datasets.
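As a small illustration of two of the mitigations named above, the following PyTorch sketch combines dropout and weight decay against overfitting (point 3) with a DataLoader that streams mini-batches so the full dataset never has to sit in memory (point 2). The layer sizes, hyperparameters, and the random stand-in dataset are illustrative assumptions only.

```python
# Minimal PyTorch sketch: regularization plus mini-batch streaming.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # dropout regularization against overfitting
    nn.Linear(64, 2),
)
# weight_decay adds L2 regularization at the optimizer level.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for a large dataset; the DataLoader streams it in
# small batches instead of loading everything into memory at once.
data = TensorDataset(torch.randn(10_000, 128), torch.randint(0, 2, (10_000,)))
for features, labels in DataLoader(data, batch_size=256, shuffle=True):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```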
  • asked a question related to Big Data
Question
3 answers
How has the concept of liberty changed in the era of Big Data?
Relevant answer
Answer
Erika Sanchez, the concept of liberty has undergone significant changes in the era of Big Data. Big Data refers to the vast amount of digital information generated through various sources such as social media, internet browsing, online transactions, and sensor networks. This abundance of data has raised important questions regarding individual privacy, surveillance, and the balance between personal freedoms and societal interests.
In the context of Big Data, the concept of liberty encompasses two key aspects: informational privacy and algorithmic decision-making.
1. Informational Privacy: With the exponential growth of data collection and analysis, concerns about privacy have intensified. Big Data analytics often involve extracting valuable insights from individual-level data, which can include personal details, preferences, and behavior patterns. This raises concerns about the potential misuse, unauthorized access, or unintended consequences of data handling. The concept of liberty in this context involves the right to control one's personal information, make informed decisions about its usage, and maintain a level of privacy and autonomy in the digital realm.
2. Algorithmic Decision-Making: Big Data analytics heavily rely on algorithms to process and make sense of the vast amounts of data. These algorithms can have a significant impact on various aspects of our lives, including employment, finance, healthcare, and law enforcement. The concept of liberty here involves the transparency and accountability of these algorithms. It raises questions about the potential biases, discrimination, and lack of human agency in decisions that affect individuals' opportunities and outcomes. Ensuring fairness, explainability, and the ability to challenge or question automated decisions is crucial to preserving individual liberties in the era of Big Data.
Moreover, the concept of liberty also intersects with broader discussions on government surveillance, data protection regulations, and the role of corporations in collecting and utilizing personal data. Striking the right balance between the benefits of data-driven insights and the preservation of individual freedoms is a complex challenge in the era of Big Data.
It is important to note that different perspectives exist on how to navigate the relationship between Big Data and liberty. Some argue that the responsible use of data can enhance individual choices and societal benefits, while others emphasize the need for robust safeguards and regulations to protect privacy and prevent abuse.
Overall, the concept of liberty in the era of Big Data requires thoughtful consideration of individual rights, privacy protection, transparency, and accountability in data collection, analysis, and decision-making processes.
  • asked a question related to Big Data
Question
4 answers
I am a research scholar. The area I chose for my PhD is Management (Marketing). My supervisor suggested some keywords for finding literature: Big Data Analytics and Sustainability. Now I am unable to work out in what context I should search the literature. How can I choose a particular area in which to continue my research?
Please help!
Relevant answer
Answer
To answer your further question,
Run a search on "big data" +analytics +sustainability +"supply chain" +retail
You may also read more widely about what researchers and practitioners mean by words like "big data", "analytics", and "sustainability". They mean very different things to different people.
The major retail chains, and the major wholesale distribution companies, at least in Europe and North America, have been analysing their data for decades with the objective of reducing costs and improving margins. If more efficient packaging reduces plastic waste, or shorter transport routes reduce burned fuel, or more efficient lighting reduces power costs, then it's a bonus, but not the original goal.
  • asked a question related to Big Data
Question
1 answer
,,
Relevant answer
Answer
Hadoop technology plays a vital role in big data analytics by providing scalable storage, distributed processing, cost-effectiveness, flexibility, and fault tolerance. Its ability to handle large volumes of diverse data and execute complex analytics tasks in a distributed manner makes it an essential tool for organizations dealing with big data.
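For a concrete feel of Hadoop's distributed-processing model, below is a minimal sketch using the Hadoop Streaming interface, which lets any executable act as a mapper or reducer: the framework feeds input lines on stdin, expects tab-separated key/value pairs on stdout, and delivers reducer input sorted by key. The log format, script name, and the average-bytes-per-status task are illustrative assumptions, not a prescribed setup.

```python
#!/usr/bin/env python3
# avg_bytes.py -- illustrative Hadoop Streaming mapper/reducer pair,
# e.g. run with: -mapper "avg_bytes.py map" -reducer "avg_bytes.py reduce"
import sys

def mapper():
    # Assume whitespace-separated access-log lines ending in
    # "<status> <bytes>"; emit one "status<TAB>bytes" pair per line.
    for line in sys.stdin:
        parts = line.split()
        if len(parts) >= 2 and parts[-1].isdigit():
            print(f"{parts[-2]}\t{parts[-1]}")

def reducer():
    # Input arrives sorted by key, so all values for one status code
    # are contiguous; compute the average byte count per status.
    key, total, n = None, 0, 0
    for line in sys.stdin:
        k, v = line.rstrip("\n").split("\t")
        if k != key and key is not None:
            print(f"{key}\t{total / n:.1f}")
            total, n = 0, 0
        key, total, n = k, total + int(v), n + 1
    if key is not None:
        print(f"{key}\t{total / n:.1f}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```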
  • asked a question related to Big Data
Question
4 answers
..
Relevant answer
Answer
Dear doctor,
I hope the following gives a useful answer:
"Missing values can be handled by deleting the rows or columns that contain null values. If a column has more than half of its rows null, the entire column can be dropped. Rows in which one or more column values are null can also be dropped."
Sincere Greetings and Regards
Dr. Sundus Fadhil Hantoosh
  • asked a question related to Big Data
Question
3 answers
If you had the opportunity, what artificial intelligence would you design and create to be helpful in the research, analytical, editorial, other work you do in conducting your scientific research and/or describing its results?
In your opinion, how would it be possible to improve the processes of conducted research and analytical work, processing of the results of conducted research through the use of artificial intelligence in combination with certain technologies typical of the current fourth technological revolution, technologies categorised as Industry 4.0, including analytics conducted on large sets of data and information, on Big Data Analytics platforms?
The development of artificial intelligence technologies has accelerated in recent years. New applications of specific information systems, various ICT information technology solutions combined with technologies typical of the current fourth technological revolution, technologies categorised as Industry 4.0, including machine learning, deep learning, artificial intelligence and analytics performed on large data and information sets, on Big Data Analytics platforms, are emerging. Particularly in the field of ongoing research work, where large sets of both qualitative information and large sets of quantitative data are produced, the aforementioned technologies are particularly useful in facilitating analytics, processing, elaboration of research results and their preparation for presentation at scientific conferences and in scientific publications. In the analytics of large quantitative data sets, analytical platforms built using integrated information systems, computers characterised by high performance computing power, equipped with servers, high-capacity memory disks, on which Big Data Analytics platforms are built, are used. On the other hand, artificial intelligence technology can also be useful for aggregating, multi-criteria processing and elaboration of large sets of qualitative information. In addition to this, certain IT applications, including statistical and business intelligence applications, are also useful for processing the results of studies carried out, presenting them in scientific publications, statistically processing large data sets, generating descriptions and drawing graphs based on them. As part of the digital representation of researched, complex, multi-faceted processes, digital twin technology can be useful. Within the framework of improving online data transfer, remote communication conducted between researchers and scientists, for example, Blockchain technology and new cyber security solutions may be helpful.
Probably many researchers and scientists would like to have access to state-of-the-art ICT and Industry 4.0 technologies, including Big Data Analytics, artificial intelligence, deep learning, digital twins, Business Intelligence, Blockchain, etc. Many researchers would probably like to improve the processes of their research and analytical work, and the processing of research results, through the use of artificial intelligence in combination with certain technologies typical of the current fourth technological revolution, technologies categorised as Industry 4.0, including analytics carried out on large sets of data and information, on Big Data Analytics platforms.
The construction of modern laboratories and research and development centres in schools, colleges and universities, equipped with the above-mentioned new ICT and Industry 4.0 technologies, is therefore probably an important factor in the development of the scientific and research and development activities of a particular scientific institution. It is, however, usually limited by the financial resources that schools, colleges and universities are able to allocate for these purposes. Should these financial resources appear, the questions formulated above would become relevant. In such a situation, as part of a systemic approach to the issue, the construction of such modern laboratories and research and development centres would also be determined by the priority directions of research work, the specific nature of the research carried out in relation to the directions of the teaching process, the mission adopted by the scientific institution in the context of its research and scientific work, the achievement of specific social objectives, etc.
In view of the above, I would like to address the following questions to the esteemed community of scientists and researchers:
In your opinion, how would it be possible to improve the processes of conducted research and analytical work, processing of the results of conducted research through the use of artificial intelligence in combination with certain technologies typical of the current fourth technological revolution, technologies classified as Industry 4.0, including analytics conducted on large sets of data and information, on Big Data Analytics platforms?
If you had the opportunity, what artificial intelligence would you design and create to be helpful in the research, analytical, editorial, other work you carry out as part of your scientific research and/or describing its results?
What artificial intelligence would you design and create to be helpful in the research, analytical, data processing, editorial, other work you are doing?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Counting on your opinions, on getting to know your personal views, and on an honest approach to discussing scientific issues rather than ready-made answers generated in ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
None: I don't need it.
  • asked a question related to Big Data
Question
1 answer
..
Relevant answer
Answer
Dear doctor
The traditional, or ancient, concept of a database was simple: records were maintained in paper-based systems. Things were not very complex, as the number of users/customers was small.
Traditional database management was very cumbersome and required a large number of people to be involved in the process of collecting and maintaining records of end users. The records were mostly handwritten, on paper, so they were prone to damage, loss or theft.
However, with advances in technology and a growing number of users, the shift gradually moved towards more robust and secure forms of data storage. Databases like Oracle, IBM Db2, MySQL, etc. came to the rescue, moving storage towards online web services/databases. This required people using the database to have knowledge of programming languages like Java, .NET, C++, etc. in order to interact with the backend database.
Big Data Testing can be defined as the procedure of examining and validating the functionality of Big Data applications. Big Data is a collection of a huge amount of data that traditional storage systems cannot handle.
Testing such a huge amount of data requires special tools, techniques and terminology.
Regards,
Dr. Sundus Fadhil Hantoosh
  • asked a question related to Big Data
Question
1 answer
,,
Relevant answer
Answer
I quote the following and hope it conveys the meaning of Big Data Testing:
"Big Data Testing is a process that involves examining and confirming the functionality of Big Data applications. Big Data is the term for a collection of enormous amounts of data that traditional storage systems cannot handle.
There are several areas where a testing strategy is required. There are various types of testing in Big Data projects, such as database testing, infrastructure testing, performance testing, and functional testing. Big Data is defined as a large volume of data, structured or unstructured. Data may exist in any format, like flat files, images, videos, etc."
Regards,
Dr. Sundus Fadhil Hantoosh
  • asked a question related to Big Data
Question
8 answers
Can artificial intelligence already predict our consumer behaviour and in a short while will it be able to predict which shop we will go to and what we will buy tomorrow?
With the help of artificial intelligence, how can systems for monitoring citizens' consumer behaviour based on GPS geolocalisation and information contained in smartphones be improved?
The lockdowns and national quarantines introduced during the coronavirus pandemic (Covid-19) caused a strong decline in the sales and turnover generated in traditionally, physically operating shops and service establishments. The lockdowns imposed on selected service industries and on traditionally operated trade also accelerated e-commerce, the sale of products and services via the Internet. When the coronavirus pandemic was no longer interpreted in terms of high health and economic risk, a significant proportion of traditionally operated trade and physical service establishments returned to traditionally conducted business, customer service, and product or service sales. On the other hand, emerging ICT and Industry 4.0 solutions are being implemented that support the economic activities of companies, enterprises, service establishments and shops producing and/or offering their products or services in both traditional and Internet-based formats. Since the pandemic came to be regarded as over and no longer generating major risks for the economic activities of service establishments and shops, new ICT and Industry 4.0 technologies, including artificial intelligence, have been implemented in information systems to support the sales of product or service offerings, including improved tools for activating potential consumers, getting customers interested in new offerings, and encouraging customers to visit stationary shops and service establishments. In this regard, startups have been developing rapidly over the past few years which, using anonymous mobile user identifiers and the accurate location and Internet-usage data available in various applications installed on smartphones, are able to pinpoint where a smartphone user is at any given time and diagnose whether he or she happens to be making a purchase in a specific stationary shop, or walking down the street past an establishment providing specific services and perhaps considering using those services. Where a technology start-up holds data on a specific Internet user drawn from a number of different applications and, on the basis of this data collected on Big Data Analytics platforms for processing and analysing information, has built an information-rich profile of the interests and purchasing preferences of a kind of digital avatar corresponding to that user, then, in combination with analysis of current customer behaviour and GPS-based geolocation, it is able to make real-time predictions about the subsequent behaviour and/or purchasing decisions of individual potential customers of specific product or service offerings. Some technology start-ups conducting this kind of analytics, based on large sets of customer data, on geolocation, on the use of specific apps and social media available on the smartphone, and on knowledge of the psychology of consumer behaviour, are able first to locate consumers precisely in real time relative to specific shops, service establishments, etc., and then to display, on advertising banners appearing in specific smartphone applications, information about the current offer, including a price or other promotion for a specific product available in the shop where the Internet user and potential customer is currently located.
Thanks to this type of technological solution, more and more often, when an Internet user with a smartphone is in the vicinity of specific stands, shop shelves, or specific shops in shopping centres, and is thinking about buying a specific product, at that moment he or she receives information on the smartphone: an advertisement appears with a price or other promotion concerning that particular product or a similar, highly substitutable one. At the moment when the customer is in a specific shop or part of a shop, online advertisements are displayed on his or her smartphone, e.g. in social media, the Google ecosystem, third-party web browsers or other applications that the potential customer has installed.
When such technological solutions are complemented by artificial intelligence analysing the consumer behaviour of individual customers across different product and service offerings, it becomes possible to create intelligent analytical systems capable of predicting who will visit a specific shop, when they will do so, and what they plan to buy there. Statistically, a citizen has several applications installed on his or her smartphone that provide the technology-based analytics companies with data about their current location. Therefore, thanks to the use of artificial intelligence, it may not be long before Internet users see messages and online advertisements displayed on their smartphones showing the products and services they are about to buy, or will be thinking about, tomorrow. Perhaps the artificial intelligence involved in this kind of analytics is already capable of predicting our consumer behaviour in real time and will soon be able to predict which shop we will go to and what we will buy tomorrow.
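For a concrete feel of the geolocation step described above, here is a minimal geofencing sketch: deciding from GPS coordinates whether a user is inside a store's radius. The store list, coordinates, and radii are illustrative assumptions; real systems would add consent handling and far richer behavioural features before any prediction step.

```python
# Minimal geofencing sketch: which stores' radii contain the user?
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in metres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

# Illustrative store registry: name, latitude, longitude, radius in metres.
STORES = [
    ("GroceryMart", 52.2297, 21.0122, 50.0),
    ("TechShop",    52.2301, 21.0150, 30.0),
]

def stores_near(lat, lon):
    # Return the stores whose geofence contains the user's position.
    return [name for name, slat, slon, r in STORES
            if haversine_m(lat, lon, slat, slon) <= r]

print(stores_near(52.2298, 21.0123))  # -> ['GroceryMart']
```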
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
With the help of artificial intelligence, how can monitoring systems for citizens' consumer behaviour based on GPS geolocation and information contained in smartphones be improved?
Can artificial intelligence already predict our consumer behaviour and in a few moments will it be able to predict which shop we will go to and what we will buy tomorrow?
Can artificial intelligence already predict our consumer behaviour?
What do you think about this topic?
What is your opinion on this subject?
Please answer,
I invite you all to discuss,
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Thank you very much,
Best regards,
Dariusz Prokopowicz
  • asked a question related to Big Data
Question
1 answer
,,
Relevant answer
Answer
Big data testing has several advantages, including guaranteeing data quality, strengthening security and compliance, minimizing cost, enhancing system performance, and ensuring scalability and dependability.
Organizations can confirm the quality, completeness, and integrity of huge datasets through thorough testing, providing trustworthy insights for defensible decision-making. In order to ensure effective handling of data quantities and user demands, testing also aids in identifying and resolving performance issues. Sensitive data is protected by compliance and security testing, and cost optimization is made possible by effective infrastructure and storage. In general, big data testing helps businesses to fully utilize their data assets and provide strategic results.
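As a small illustration of the quality, completeness, and integrity checks mentioned above, here is a minimal sketch of rule-based data validation in Python with pandas. The column names, rules, and sample data are illustrative assumptions; at big data scale such checks would typically run distributed, e.g. via frameworks such as Great Expectations or Deequ.

```python
# Minimal sketch of automated data-quality checks: completeness,
# uniqueness, and validity rules expressed as simple assertions.
import pandas as pd

def check_quality(df):
    failures = []
    # Completeness: key columns must not contain nulls.
    for col in ("user_id", "event_time"):
        if df[col].isna().any():
            failures.append(f"nulls found in required column '{col}'")
    # Integrity/uniqueness: no duplicates on the primary key.
    if df["user_id"].duplicated().any():
        failures.append("duplicate user_id values")
    # Validity: numeric fields must stay within an expected range.
    if not df["amount"].between(0, 1_000_000).all():
        failures.append("amount out of expected range")
    return failures

df = pd.DataFrame({
    "user_id": [1, 2, 3],
    "event_time": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),
    "amount": [10.0, 250.5, 99.9],
})
print(check_quality(df) or "all checks passed")
```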
Best!
PS
  • asked a question related to Big Data
Question
2 answers
,,
Relevant answer
Answer
Apache Hadoop is an open-source framework, maintained by the Apache Software Foundation, that is used to efficiently store and process large datasets ranging in size from gigabytes to petabytes. Instead of using one large computer to store and process the data, Hadoop allows clustering multiple computers to analyse massive datasets in parallel more quickly. @Nijad Ahmad
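To illustrate that parallel model, here is a minimal PySpark sketch (Spark commonly runs on top of Hadoop clusters and HDFS); "local[*]" merely simulates a cluster on one machine, and the numbers are illustrative.

```python
# Minimal PySpark sketch of parallel analysis over partitioned data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

# Split one million numbers into 8 partitions; each partition is
# processed in parallel, and the partial sums are combined at the end.
rdd = spark.sparkContext.parallelize(range(1_000_000), numSlices=8)
print(rdd.map(lambda x: x * x).sum())

spark.stop()
```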