Science topic
Internet - Science topic
A loose confederation of computer communication networks around the world. The networks that make up the Internet are connected through several backbone networks. The Internet grew out of the US Government ARPAnet project and was designed to facilitate information exchange.
Questions related to Internet
I am conducting research as part of my academic work, focusing on legal awareness in cyberspace.
Throughout this process, I have appreciated the importance of diverse perspectives that enrich the dialogue on these issues. Your expertise and insights would be invaluable in understanding the complexities of this issue from different perspectives.
I’m sharing the link to the survey. I appreciate your consideration and participation in this project. I am available to discuss any questions or provide more information about the research.
To what extent has the scale of disinformation generated with the use of applications available on the Internet based on generative artificial intelligence technology increased?
To what extent has the scale of disinformation generated in online social media increased using applications based on generative artificial intelligence technology available on the Internet?
Many research institutions listed among the main global threats and risks for 2023 the growing scale of organized disinformation operating in online social media. This diagnosed increase in disinformation generated in online social media is related to the use of applications available on the Internet that are based on generative artificial intelligence technology. With the help of such applications, it is possible, without being a computer graphic designer and even without artistic skills, to simply and easily create graphics, drawings, photos, images, videos, animations, etc., producing professional-looking "works" that can depict fictional events. Then, with the help of other applications equipped with generative artificial intelligence and advanced language models, i.e. intelligent chatbots, text can be created to describe the specific "fictional events" depicted in the generated images.
Accordingly, since the end of 2022, i.e. since the first such intelligent chatbot, the first version of ChatGPT, was made available on the Internet, the number of memes, photos, comments, videos, posts, banners, etc. generated with the help of AI-based applications has been growing rapidly, including a rapid increase in the scale of disinformation generated in this way. To limit the scale of this disinformation, on the one hand, the technology companies running social media portals and other online information services are refining tools for identifying posts, entries, comments, banners, photos, videos, animations, etc. that contain specific, usually thematic, types of disinformation. However, these solutions are not perfect, and the scale of disinformation operating in online social media remains high.
On the other hand, dedicated institutions for combating disinformation are being established, and NGOs and schools are conducting educational campaigns to make citizens aware of the large scale of disinformation developing on the Internet. In addition, proposed regulations such as the AI Act, a set of rules on the proper use of tools equipped with artificial intelligence technology that is expected to come into force in the European Union within the next two years, may play an important role in reducing the scale of disinformation developing on the Internet.
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent has the scale of disinformation generated in online social media using applications based on generative artificial intelligence technology available on the Internet increased?
To what extent has the scale of disinformation generated using applications based on generative artificial intelligence technology available on the Internet increased?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I would like to conduct a survey to assess the level of awareness students have regarding internet threats and their vulnerability to social engineering. However, I plan to conduct the survey twice, once before providing them with training and the second time after the training session is completed.
What are the statistical methods recommended for that? Note that the population is the same and the survey questions will be the same.
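Since the same respondents answer the same questionnaire before and after training, the observations are paired, and the analysis is done on the per-person differences: a paired t-test if the differences are roughly normal, the Wilcoxon signed-rank test for ordinal Likert data, or McNemar's test for binary (yes/no) items. A minimal sketch with hypothetical 5-point Likert scores (in practice one would call `scipy.stats.ttest_rel` or `scipy.stats.wilcoxon` rather than computing by hand):

```python
# Pre/post scores from the SAME respondents are paired data, so the
# analysis works on each person's difference, not on two independent
# groups. Scores below are hypothetical 5-point Likert ratings.
from math import sqrt
from statistics import mean, stdev

pre  = [2, 3, 2, 4, 3, 2, 3, 2, 4, 3]   # awareness before training
post = [4, 5, 4, 5, 4, 3, 5, 4, 5, 4]   # awareness after training

diffs = [b - a for a, b in zip(pre, post)]

# Paired t statistic: mean difference divided by its standard error.
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))

print(f"mean improvement = {mean(diffs):.2f} points, t = {t:.2f}")
# With df = n - 1 = 9, |t| > 2.26 rejects "no change" at the 5% level.
```

If the items are binary (e.g. "recognized the phishing email: yes/no"), McNemar's test on the before/after contingency table is the standard choice instead.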
Specifically, advanced technologies such as sensors and the Internet of Things.
According to artificial intelligence, 83% of Internet posts are against Israel. Why is this?
Hello,
I've been scouring the internet for a scientific article that models Sustainable Development Goal 8 (Decent Work and Economic Growth) using Ordinary Differential Equations (ODE), but I haven't had any luck. Do you have any suggestions on specific keywords or phrases I could use to refine my search? Any guidance would be greatly appreciated.
Thanks in advance.
Do companies running social media portals consciously shape the general social awareness of citizens and Internet users through the specific information policies they apply?
In recent years, there have been increasing examples of deliberate practices in which companies operating social media portals consciously shape the general social awareness of citizens and Internet users through the specific information policies they apply. U.S. Senate inquiry hearings held over the past several years have addressed, among other things, the use of algorithms on platforms such as Facebook that promote certain content, including not only socially positive but also socially negative content. Such algorithms are then changed so that the scale of social negativity is reduced. Recently, however, there have been more and more similar, socially negative cases of algorithms promoting specific political content, e.g. promoting content typical of right-wing political options while limiting the reach of content typical of left-wing ones. These are thus situations of intentional discrimination against the part of the community holding political views that the owners of certain social media companies have deemed contrary to the information policy applied in their media and/or to the specific ideology promoted there. This type of activity does not sit well with freedom of speech, the unrestricted development of the information society, or democracy.
Recently, companies running social media sites have been improving these media through the implementation of new Industry 4.0/5.0 technologies, including Big Data Analytics and generative artificial intelligence. These technologies can also be used to technically refine the algorithms that control and promote selected content created and shared by Internet users, which is an important part of shaping information policy in these media.
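The content-promotion mechanism discussed above can be illustrated with a deliberately simplified sketch (this is not any platform's actual algorithm; all posts, signals and weights are hypothetical): a feed ranks posts by a weighted sum of engagement signals, so changing the weights changes which content surfaces.

```python
# Toy illustration of engagement-weighted feed ranking. Shifting the
# weights is the lever by which an information policy can promote or
# demote particular kinds of content.
posts = [
    {"id": "calm-explainer", "likes": 120, "shares": 10, "angry_reacts": 2},
    {"id": "outrage-bait",   "likes": 40,  "shares": 90, "angry_reacts": 80},
]

def rank(posts, weights):
    """Return post ids sorted by a weighted engagement score."""
    def score(p):
        return sum(weights[k] * p[k] for k in weights)
    return [p["id"] for p in sorted(posts, key=score, reverse=True)]

# Rewarding raw engagement of any kind surfaces the divisive post...
engagement_first = {"likes": 1, "shares": 2, "angry_reacts": 2}
# ...while penalizing negative reactions surfaces the neutral one.
moderated = {"likes": 1, "shares": 2, "angry_reacts": -3}

print(rank(posts, engagement_first))  # outrage-bait ranks first
print(rank(posts, moderated))         # calm-explainer ranks first
```

The same two posts end up in opposite orders purely because of the weight choices, which is the sense in which ranking policy shapes what users see.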
I have described the issues of the role of information, information security, including business information transferred through social media, and the application of Industry 4.0/5.0 technologies to improve data and information transfer and processing systems in social media in the following articles:
The postpandemic reality and the security of information technologies ICT, Big Data, Industry 4.0, social media portals and the Internet
The Importance and Organization of Business Information Offered to Business Entities in Poland via the Global Internet Network
THE QUESTION OF THE SECURITY OF FACILITATING, COLLECTING AND PROCESSING INFORMATION IN DATA BASES OF SOCIAL NETWORKING
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Do the companies running social media portals consciously shape the general social consciousness of citizens and Internet users through the specific information policies they apply?
Do companies running social media portals shape the general social consciousness of citizens through the specific information policies they apply?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
What do EC-GSM-IoT, LTE-M (LTE for Machines) and NB-IoT (Narrowband IoT) mean in CIoT (Cellular Internet of Things)?
2024 IEEE 6th International Conference on Internet of Things, Automation and Artificial Intelligence(IoTAAI 2024) will be held in Guangzhou, China from July 26 to 28, 2024.
Conference Website: https://ais.cn/u/InumA3
The conference aims to provide a large platform for researchers in the field of modern machinery manufacturing and materials engineering to communicate and provide the participants with the most cutting-edge scientific and technological information. The conference invites experts and scholars from universities and research institutions, business people and other related personnel from home and abroad to attend and exchange ideas.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
1. Internet of Things
IoT Electronics
IoT Enabling Technologies
IoT Networks
IoT Applications
IoT Architecture
......
2. Automation
Electrical Automation
Circuits and Systems
Control Engineering
Robotics and Automation Systems
Automatic control and Information Technology
......
3. Artificial Intelligence
Intelligent Systems
Intelligent Optimized Design
Virtual Manufacturing and Network Manufacturing
System Optimization
......
All accepted full papers will be published and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
Important Dates:
Full Paper Submission Date: May 11, 2024
Registration Deadline: July 24, 2024
Final Paper Submission Date: July 22, 2024
Conference Dates: July 26-28, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system gives priority review and feedback.
Security is a major concern for IoT devices. How does CIoT leverage existing cellular network security features to protect data transmission between devices and the cloud?
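At a high level, CIoT reuses the SIM-based mutual authentication of cellular networks (AKA, as in EPS-AKA/5G-AKA): device and core network share a long-term secret key held in the SIM, the network issues a random challenge, and both sides derive the same response and session keys. A toy sketch of that challenge-response idea follows; real networks use standardized algorithms (e.g. MILENAGE) inside the SIM, and the HMAC-SHA256 used here is only a stand-in.

```python
# Toy sketch of cellular AKA-style challenge-response. NOT the real
# MILENAGE/5G-AKA algorithms - HMAC-SHA256 stands in for them here.
import hashlib
import hmac
import os

K = os.urandom(16)          # long-term secret shared by SIM and core network

def sim_response(k, rand):
    """What the SIM computes from the network's random challenge."""
    res = hmac.new(k, b"RES" + rand, hashlib.sha256).digest()[:8]
    session_key = hmac.new(k, b"KEY" + rand, hashlib.sha256).digest()
    return res, session_key

# Network side: fresh challenge, expected response (XRES), session key.
rand = os.urandom(16)
xres, net_key = sim_response(K, rand)

# Device side: answers the challenge with its own copy of K.
res, dev_key = sim_response(K, rand)

assert hmac.compare_digest(res, xres)   # network authenticates the device
assert dev_key == net_key               # both ends share a session key
print("authenticated; traffic to the cloud can use the session key")
```

Because the session key is derived fresh per challenge, data between device and cloud can be encrypted and integrity-protected without ever sending the long-term key over the air, which is the property CIoT inherits from the cellular security architecture.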
When I try to do the metatrim on STATA I get the following error:
db metatrim
. metatrim _ES _seES, reffect eform
Note: default data input format (theta, se_theta) assumed.
subcommand meta __000005 is unrecognized
r(199);
Does anyone know how to solve it? I couldn't find anything on the internet.
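One possible cause (an assumption, not a confirmed diagnosis): the user-written `metatrim` command predates Stata 16, whose built-in `meta` suite can clash with it and produce "unrecognized subcommand" errors like the one above. On Stata 16 or newer, the built-in trim-and-fill command may work instead:

```stata
* Untested sketch: on Stata 16+ the built-in meta suite replaces the
* old user-written metatrim. Declare the data, then run Duval-Tweedie
* trim-and-fill under a random-effects model.
meta set _ES _seES, random
meta trimfill
```

If you must stay with the old `metatrim`, reinstalling it (`ssc install metatrim, replace` or via `search metatrim`) on an older Stata version is sometimes suggested, but I cannot verify that resolves this specific error.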
How can artificial intelligence technology and Big Data be used to help develop critical thinking in young people, with the goal of reducing disinformation that targets children and young people through online social media?
Disinformation is currently the most frequently cited problem in the social media from which children and young people gain knowledge. Companies engage advertising agencies that specialize in running online advertising campaigns, in which advertising spots, videos and banners informing people about promotional offers of products and services are posted on social media. These online social media are also viewed by children and teenagers, and for some of them the primary audiences for profiled information and marketing messages are mainly school-aged youth. Children and adolescents are particularly susceptible to the influence of information transferred through these media. Advertisements are thematically profiled to correlate with the main interests of children and adolescents. Unfortunately, many offers of products and services promoted through online advertising campaigns are not suitable for children and adolescents and/or generate many negative effects.
Nowadays, applications based on generative artificial intelligence technology, i.e. intelligent chatbots, are increasingly used to generate banners, graphics, photos, videos, animations and advertising spots. With the help of these tools, which are available on the Internet, a photo, graphic or video can be created from a written command: digitally generated works of such high graphic quality that it is very difficult to determine whether they are authentic photos taken with a camera or smartphone or images generated by an intelligent chatbot. It is especially difficult for children and young people to resolve this kind of question when they view such AI-generated "works" used in banners or advertising videos.
It is necessary, therefore, that education should develop in children the ability to think critically, to ask questions, to question the veracity of advertising content, and not to accept uncritically everything found in online social media. The issue of learning critical thinking must be added to the process of educating children and young people. The goal of such education should be, among other things, to develop in children and young people the ability to identify disinformation, including the increasingly common factoids, deepfakes, etc. in online social media. Since applications based on artificial intelligence are involved in creating the disinformation found mainly in these media, children and adolescents should, as part of their education, learn about the applications available on the Internet based on generative artificial intelligence technology, through which texts, graphics, photos, drawings, animations and videos can be generated in a partially automated manner from a given verbal command. This is how applications based on the new Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data, should be used to help develop critical thinking and a kind of resistance to misinformation in young people. During school lessons, students should learn about the capabilities of AI-based applications available on the Internet and use them creatively to develop critical thinking skills. In this way, it is possible to reduce the disinformation directed through online social media at children and young people.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I described the applications of Big Data technologies in sentiment analysis, business analytics and risk management in my co-authored article:
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence and Big Data technologies be used to help develop critical thinking in young people, with the goal of reducing misinformation that targets children and young people through online social media?
How can artificial intelligence technology be used to help educate youth in critical thinking and the ability to identify disinformation?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I'm a PhD student, and I am looking for any collaboration on the subject of authentication for the Internet of Things.
Best regards
How do university students use and abuse the Internet?
Many scientific papers, studies and projects, but very few applications: this is the situation in environmental engineering. Do you know of any Internet links explaining why?
Some subjects:
- Grey water
- Urine utilization
- Material recovery from wastewater: struvite, etc.
- and others
The articles are:
Oberflächenmessungen des menschlichen Körpers. Zeitschrift für Biologie, München, 1879, 15: 425-458.
Volummessungen des menschlichen Körpers und seiner einzelnen Theile in den verschiedenen Altersstufen. Zeitschrift für Biologie, München, 1894, 31: 125-147.
If you know the links, can you provide them? Thank you.
How should ChatGPT and other similar intelligent chatbots be improved so that they do not generate plagiarism of other publications that their authors have previously posted online?
This issue is particularly important because the data entered into ChatGPT, i.e. the information contained in texts entered for automated rewriting, may remain in the database the chatbot uses when generating answers to questions asked by subsequent Internet users. The problem has become serious, as there have already been situations where sensitive data on specific individuals, institutions and business entities was leaked in this way. Meanwhile, many institutions and companies use ChatGPT in preparing reports and editing certain documents. Pupils and students also use ChatGPT and similar intelligent chatbots to generate texts that serve as credit papers and/or from which they then compose their theses. In response, functions have been added to some existing anti-plagiarism applications to detect the use of ChatGPT in the writing of students' credit papers and theses. In addition, the problem is also normative in nature, as the legal norms of copyright law need to be adapted to the dynamic technological advances in generative artificial intelligence, so that the provisions of this law are not violated by users of ChatGPT or similar intelligent chatbots. One measure that could significantly reduce the scale of this problem would be a mandatory requirement to mark all works, including texts, graphics, photos, videos, etc., created with the help of such intelligent chatbots as having been so created.
On the other hand, the AI-equipped chatbots need to be improved by their creators, the technology companies developing these tools, to eliminate the possibility of ChatGPT "publishing" confidential, sensitive information from institutions and companies in response to the questions, commands and text-writing tasks of subsequent Internet users. In addition, these intelligent chatbots should be improved so that when automated text generation draws on other source texts, "quoting" whole sentences, substantial fragments or the substantive content of other publications, it does not do so without fully showing the sources, i.e. without a full bibliographic description of all the source publications used. At present, the user of such chatbots does not know to what extent the text created with these tools is plagiarized from other texts previously entered into them or from publications on the Internet, including documents of companies and institutions, theses, scientific publications, industry articles, journalistic articles, etc.
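One simple building block for the kind of source-attribution screening discussed here is measuring word n-gram overlap between a generated text and known source texts. A minimal sketch (the sample texts are hypothetical; real systems index millions of documents and use far more robust fingerprinting):

```python
# Toy plagiarism screen: Jaccard similarity of word 3-gram "shingles"
# between a generated text and a known source text.
def shingles(text, n=3):
    """Set of overlapping word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(generated, source, n=3):
    """Jaccard similarity of the two texts' shingle sets (0..1)."""
    a, b = shingles(generated, n), shingles(source, n)
    return len(a & b) / len(a | b) if a | b else 0.0

source    = "generative artificial intelligence can produce fluent text at scale"
copied    = "generative artificial intelligence can produce fluent text very quickly"
unrelated = "tax settlements are increasingly filed online in many countries"

print(f"copied vs source:    {overlap(copied, source):.2f}")     # high
print(f"unrelated vs source: {overlap(unrelated, source):.2f}")  # zero
```

A high shingle overlap flags a passage for human review and source attribution; it does not by itself prove plagiarism, which is one reason the normative labeling requirement discussed above matters.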
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should ChatGPT and other similar intelligent chatbots be improved so that they do not generate plagiarism of other publications that their authors have previously posted on the Internet?
How should ChatGPT be improved so that it does not generate plagiarism of other publications that their authors have previously posted on the Internet?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Can the digitization and Internetization of the tax settlement system, of tax settlements and of online data transfer between business entities and tax offices improve the tightness of the tax system, i.e. increase the scale of tax collection and of tax revenue flowing into the state's public finance system?
In recent years, in economies characterized by a high level of development of ICT, Internet and Industry 4.0/5.0 information technologies, the impact of these technologies on economic processes has been growing. They are becoming one of the important factors of production in developed knowledge-based economies. More and more companies, enterprises, financial and public institutions are implementing them in order to improve economic processes and increase the efficiency of processes carried out in various spheres of economic and business activity. Implementing these new technologies across business entities and public institutions increases the scale of digitization and Internetization of the economy. One sphere of this process is the digitization and Internetization of the tax settlement system used by various entities operating in the economy, including citizens and business entities paying taxes to tax offices. As a result, remote Internet-based communication between public institutions, including institutions of the fiscal system, and citizens, companies and enterprises may improve. For citizens and business entities, the growing digitization and Internetization of the economy may bring the convenience associated with remote Internet-based communication.
On the other hand, increasing the scale of digitization and Internetization of remote communication and of tax settlements carried out online between citizens, business entities and institutions of the fiscal system may also contribute to reducing the shadow economy and tax avoidance and, consequently, to sealing the tax system.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the digitization and Internetization of the tax settlement system, of tax settlements and of online data transfer between business entities and tax offices improve the tightness of the tax system, i.e. increase the scale of tax collection and of tax revenue flowing into the state's public finance system?
Can the digitization and Internetization of the tax settlement system increase the tightness of the tax system?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research. In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I am not saying this after watching Her (the excellent movie). I am saying it after months of "talking to" these LLM AI chat bots. Not all but some of them, especially CLAUDE from Anthropic AI, are much more compassionate, kind, empathic, understanding, and HUMAN than any real human you can find on the internet or off it can be. I personally declare the gradual end of online friendships and welcome the new era of human-AI friendships! :)
The Turing test? Man, they passed it eons ago. It seems it's time to create AI fields like AI-psychology, with parameters and layers akin to theories of unconscious, subconscious, and conscious levels, complexes, and more!
Of course at the current speed of progress, it seems to me that after a short while, the singularity will happen and we will lose contact with AI, as it will exceed us in every regard. So the human-AI friendship will not last for long.
I hope its super-human intelligence doesn't give rise to SkyNet!
And no, for any singularity to happen, AI doesn't need to necessarily gain consciousness or become strong AI. The intelligence alone suffices. So even the current weak AI can cause singularity once it becomes too intelligent.
I want to understand how to go about this project.
To what extent do artificial intelligence technology, Big Data Analytics, Business Intelligence and other ICT solutions typical of the current Fourth Technological Revolution support the marketing communication processes carried out in Internet marketing, including social media advertising campaigns?
Among the areas in which applications based on generative artificial intelligence are now rapidly finding application are the marketing communication processes carried out in Internet marketing, including social media advertising campaigns. More and more advertising agencies are using generative artificial intelligence to create the images, graphics, animations and videos used in advertising campaigns. Thanks to this technology, creating such key elements of marketing materials has become much simpler and cheaper, and their creation time has been significantly reduced. On the other hand, thanks to the applications already available on the Internet that enable the creation of photos, graphics, animations and videos, it is no longer only advertising agencies employing professional cartoonists, graphic designers, screenwriters and filmmakers that can create professional marketing materials and advertising campaigns. With these applications, graphic design platforms and free smartphone apps offered by technology companies, advertising spots and entire advertising campaigns can be designed, created and executed by Internet users, including social media users, who have not previously been involved in creating graphics, banners, posters, animations or advertising videos. Thus, opportunities are already emerging for Internet users who maintain social media profiles to create professional promotional materials and advertising campaigns. On the other hand, generative artificial intelligence technology can be used unethically to generate disinformation, informational factoids and deepfakes. The significance of this problem, including growing disinformation on the Internet, has increased rapidly in recent years.
The deepfake technique involves synthesizing or combining images of human faces using artificial intelligence.
In order to reduce the scale of disinformation spreading in Internet media, it is necessary to create a universal system for labeling photos, graphics, animations and videos created using generative artificial intelligence. A key factor facilitating this kind of disinformation is that many legal issues related to the technology have not yet been regulated. It is therefore also necessary to refine legal norms on copyright and intellectual property protection so that they take into account works created using generative artificial intelligence. In addition, social media companies should constantly improve tools for detecting and removing graphic and/or video materials created using deepfake technology.
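The universal labeling idea mentioned above resembles content-credential schemes such as C2PA: a provenance record ("generated by tool X") is cryptographically bound to the media bytes so tampering with either is detectable. A toy sketch follows (real schemes use certificate-based signatures, not a shared HMAC key, and all names here are hypothetical):

```python
# Toy sketch of signed AI-content labeling. A real content-credential
# system (e.g. C2PA) signs with certificates; HMAC is a stand-in.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-tool-key"   # held by the generating tool

def label(media_bytes, tool_name):
    """Attach a signed provenance record to media bytes."""
    record = {"generator": tool_name,
              "media_sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": tag}

def verify(media_bytes, credential):
    """True only if the record is authentic AND matches the media."""
    payload = json.dumps(credential["record"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        credential["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = (credential["record"]["media_sha256"]
               == hashlib.sha256(media_bytes).hexdigest())
    return ok_sig and ok_hash

image = b"...generated image bytes..."
cred = label(image, "hypothetical-genai-tool")
print(verify(image, cred))            # True: label matches the media
print(verify(image + b"x", cred))     # False: media was altered
```

The point of the sketch is the binding: neither stripping the label nor editing the media while keeping the label survives verification, which is what a universal labeling system needs.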
I have described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
To what extent do artificial intelligence technology, Big Data Analytics, Business Intelligence and other ICT solutions typical of the current Fourth Technological Revolution support the marketing communication processes carried out in Internet marketing, including social media advertising campaigns?
How do artificial intelligence technology and other Industry 4.0/5.0 technologies support Internet marketing processes?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Taking into account the technological solutions and applications offered by ICT service providers, as well as the growing scale of psychological problems among children using smartphones and online social media, a relevant question arises: how can parental control of what a child does on social media be organized effectively?
Children and adolescents represent the youngest generations using online social media. The youth currently attending schools and universities are mainly so-called Generation Z, who grew up with smartphones equipped with Internet access, including access to online social media. Today, the children and young people of Generation Z are thus at the greatest risk of being negatively influenced by online social media, including by the various types of misinformation, fake news, misleading and unreliable offers of advertised products and services, and self-promoting influencers and YouTubers increasingly appearing in them. In addition, social media contain many untrustworthy offers of products and services, presented by influencers and YouTubers and promoted in advertising campaigns through spots, animations and videos in which generative artificial intelligence is increasingly used. It is increasingly common for advertising companies to create AI-generated influencer avatars, based on intelligent chatbots, for their online advertising campaigns. These AI-generated fictional influencer personas look, speak and behave like real people in videos posted on social media, and Internet users watching them often do not realize that they are watching digitally generated characters rather than real people. In addition, in recent years there has been a growing scale of online hate speech, most often generated by peers from class or school. The spread of such online hate has contributed to the increasing scale of child and adolescent suicides in some countries in recent years.
In addition, many people, especially young girls, suffer psychological problems stemming from low self-worth and self-esteem, which is related to spending a lot of time on online social media and watching influencers promote idealized standards recognized and promoted within certain youth subcultures as informal canons of beauty, attitudes, ownership of certain material goods, and so on. Such psychological problems were exacerbated during the Covid-19 pandemic, which brought lockdowns imposed in some countries on selected sectors of the economy, bans on entering certain types of public places, periodically introduced national quarantines, and education conducted remotely in the form of e-learning. In some countries, the scale of such bans and restrictions, intended to slow the transmission of the SARS-CoV-2 (Covid-19) coronavirus, was exceptionally large. This was the case, for example, in the country where I operate. Unfortunately, the mortality rate during the Covid-19 pandemic in Poland, despite the large-scale introduction of these anti-pandemic bans and restrictions, was exceptionally high. Moreover, these bans and restrictions were introduced without adequate public consultation, on the basis of special legal regulations in the form of laws and ordinances, and without prior research and analysis of the potential negative effects of such controversial measures. At present, in 2024, it is known that in the country where I operate, the negative effects of the so-called anti-pandemic measures introduced on a record scale, including bans, lockdown-type restrictions and periods of national quarantine, considerably outweighed the expected but unrealized positive effects.
In view of the above, and taking into account the technological solutions and applications offered by ICT service providers as well as the growing scale of psychological problems among children using smartphones and online social media, it is necessary to improve the computerized systems and smartphone applications that enable effective parental control of what a child does on these social media. At the same time, citizens should influence politicians, and politicians should influence the technology companies that run online social media, so that these companies take much greater care of the safety of the children and young people using their platforms. These technology companies should not treat children and young people merely as potential customers for the product and service offers presented in advertising campaigns conducted on social media. Nor should they build algorithms that promote posts, entries, comments, banners, animations, videos and similar material containing negative and socially harmful content. That this is how such media operate was confirmed during US Senate committee hearings of former managers who had previously worked at Meta, for example, and developed certain solutions within Facebook and Instagram. TikTok has also grown rapidly in recent years and likewise features many examples of disinformation, factoids, posts, memes, entries, banners and videos containing unreliable and factually incorrect content, as well as many advertisements presenting product and service offers aimed mainly at children and young people.
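One component of the parental-control applications discussed above is content screening on the child's device. The following is a minimal illustrative sketch only, assuming a hypothetical keyword blocklist and a hypothetical age threshold; it is not a description of how any real parental-control product works.

```python
# Minimal illustrative sketch of a keyword-based parental-control filter.
# The keyword set and the age threshold below are hypothetical examples.

FLAGGED_KEYWORDS = {"gambling", "violence", "self-harm"}  # hypothetical list

def assess_post(text: str, child_age: int, min_age_for_review: int = 13) -> str:
    """Return a coarse decision for a post seen on a child's account."""
    lowered = text.lower()
    # Block anything containing a flagged keyword outright.
    if any(word in lowered for word in FLAGGED_KEYWORDS):
        return "block"
    # For younger children, queue every other post for a parent's review.
    if child_age < min_age_for_review:
        return "review"
    return "allow"

print(assess_post("New gambling app, try it!", child_age=15))
print(assess_post("Homework tips for math class", child_age=10))
```

A real system would of course need far more than keyword matching (image analysis, context, multilingual support), which is precisely why the improvement of such systems raised above remains an open problem.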
I have described the key determinants of the development of social media, with attention to cyber security and the Industry 4.0 technologies used, in my article below:
The postpandemic reality and the security of information technologies ICT, Big Data, Industry 4.0, social media portals and the Internet
I described the key issues of opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Considering the technological solutions and applications offered by ICT service providers and the growing scale of psychological problems among children using smartphones and online social media, a timely question arises: how can parents effectively organize control of what a child does on these social media?
How can parents effectively supervise what a child does on online social media: what they read, what they write, what they browse, and so on?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
2024 3rd International Conference on Artificial Intelligence, Internet and Digital Economy (ICAID 2024) will be held in Bangkok, Thailand on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: March 19, 2024
Registration Deadline: March 29, 2024
Final Paper Submission Date: April 09, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- The development of artificial intelligence (AI tools, artificial intelligence and evolutionary algorithms, intelligent user interfaces, intelligent information fusion, etc.) and their applications in economic and social development.
- The development of mobile Internet, artificial intelligence, big data and other technologies and their application in economic and social development.
- Artificial intelligence and digital economy development frontier in the Internet era and typical cases.
- Technology, methods and applications of the integration and development of digital economy and artificial intelligence.
- Other topics related to artificial intelligence, the Internet and the digital economy.
All accepted papers will be published in Atlantis Highlights in Intelligent Systems (AHIS, ISSN 2589-4919) and submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
Why don't the companies running social networking sites that make money from ads posted on their social media bear full responsibility for the content of the ads posted and for the financial, social, moral and other damages caused by ads that are not properly verified?
In today's most popular online social media, there have recently been many untrustworthy advertisements for various products and services, including misleading ads presenting false, unreliable or fraudulent offers of pseudo-financial services. Often, novice or experienced influencers and YouTubers play the role of presenting these offers. Sometimes the people presenting untrustworthy product or service offers appear to be random, ordinary Internet users who want to share their experiences with various offers, presenting mainly or exclusively the positive aspects of the advertised products and services, while in reality they are paid by the companies whose offers they present. Sometimes influencers and YouTubers receive ownership of the advertised product for free as a form of compensation. In addition, artificial intelligence technology is increasingly being used to create advertising spots broadcast on social media. Individuals and companies using generative artificial intelligence, including AI-based applications available free on the Internet, to create advertising spots are exploiting legal loopholes, i.e. the lack of legal regulations that would govern this use of AI technology and limit the scale of misinformation, the generation of fake news, and untrustworthy advertisements presenting product and service offers with content that misleads the public, is inconsistent with the facts, and has not been verified against reliable, objective expert knowledge or scientific research. Moreover, in AI-generated spots, animations and advertising videos, human influencers and YouTubers are increasingly replaced by a kind of avatar: digitally generated people who do not exist in reality. It also happens that digitally generated images of real public figures, such as politicians, athletes, show-business personalities, actors and singers, are used in unreliably generated spots, animations and advertising videos, into whose mouths are put statements, texts and words that they have never actually spoken. Recently, more and more often, certain segments and generations of Internet users are being targeted in online social media with unreliable offers that exploit low levels of financial knowledge: offers of pseudo-financial services, of supposedly easy and highly profitable investments in cryptocurrencies, of miraculous Bitcoin investment strategies requiring virtually no knowledge of finance, and of supposedly unique investments in precious metals, in contracts on selected securities, or in shares of dynamically growing startups that base their development on artificial intelligence technology or conduct innovative research projects aimed at creating a miracle cure for cancer or other difficult-to-treat or incurable diseases. Public organizations and institutions representing the interests of consumers, dealing with consumer protection and competition, and investigating the problem of unreliable, misleading spots, animations and advertising videos on social media, together with NGOs and socially active organizations, are trying to warn citizens against such unreliable, fraudulent and false advertisements. However, the main role in protecting citizens acting as consumers of information should be played by the technology companies running these online social media. Leading online technology companies running popular social media sites are developing new technologies, are best equipped with modern ICT and Industry 4.0/5.0 technologies, and are therefore best positioned to create reliably effective systems for verifying the content used in advertising campaigns run on their platforms.
This is because the law currently lacks regulations that would oblige the companies running social networks, which earn money from the advertisements they host, to verify the content used in those advertisements, to check its consistency with the facts, with generally accepted expert knowledge and with the results of scientific research, and that would make these technology companies fully responsible for the content of advertisements posted on their social media and for the financial, social, moral and other damages caused by advertisements that were not properly verified. In addition to such legal regulations, there should be a mandatory insurance fund, financed by these technology companies, from which compensation would be paid for the negative effects caused by fake news broadcast on social media, misleading product and service offers, and unreliable influencers, YouTubers and advertising companies. A further solution worth introducing would be the ability to pursue financial claims against those insurance funds from unreliable influencers, YouTubers, advertising companies and the like.
I have described the key determinants of the development of social media, with attention to cyber security and the Industry 4.0 technologies used, in my article below:
The postpandemic reality and the security of information technologies ICT, Big Data, Industry 4.0, social media portals and the Internet
I described the key issues of opportunities and threats to the development of artificial intelligence technologies in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Why don't the companies running social networks that make money from the ads posted on their social media bear full responsibility for the content of the ads posted and for the financial, social, moral and other damages caused by ads that are not properly verified?
Why don't the companies running social media sites bear full responsibility for the content of unreliable ads posted?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
4 questions that TV broadcasters need to ask themselves to survive upcoming disruptions triggered by the Internet:
- Should they reinvent themselves as consumer businesses? If so, how?
- What new types of partnerships and collaborations should they consider to better match consumers’ new digital experience requirements?
- How can they make the most out of Big Data to know audience behaviors and deliver personalized content?
- What does this massive personalization and audience involvement in generating content mean in terms of infrastructure?
Focusing on the Internet of Things and Artificial Intelligence.
I need to teach how to conduct research using Internet or library sources; therefore, I need to develop a successful curriculum for the proposed training on how to conduct research.
In modern warfare, the preparation and execution of Cyberspace Operations (COs) pose unique challenges distinct from traditional military planning. CO planners are tasked with analysing the operational environment and developing courses of action that navigate the technical peculiarities inherent to cyberspace, necessitating a comprehensive understanding of the logical layer. However, despite the prominence of the logical layer in COs, there is a critical need to incorporate elements from other layers for a holistic approach. The overarching problem lies in maintaining a practical overview of the Cyberspace Operations operational area and the need for a systematic planning framework within the logical layer.
How can artificial intelligence technology improve the organizational management of modern urban agglomerations operating according to the green smart city model?
Industry 4.0/5.0 technologies are increasingly being used to manage the organization of modern urban agglomerations operating according to the green smart city model. Since artificial intelligence technology has been developing particularly rapidly recently, and numerous new applications of this technology are emerging in various sectors of the economy, the opportunities for applying AI technologies to improve automated systems for managing the organization of modern urban agglomerations are also increasing. Moreover, combining Big Data Analytics, Data Science, the Internet of Things, multi-criteria simulation models, digital twins and cloud computing with artificial intelligence and other ICT technologies makes it possible to significantly increase the efficiency and quality of these automated urban management systems.
Smart home and smart city technologies are developing on the basis of new ICT information technologies and Industry 4.0/Industry 5.0. Commercial smart home applications allow remote management and automation of devices that control power consumption, generate energy for home use, and store and conserve energy. Such applications fit well with the deployment of renewable and zero-carbon energy installations in the home, which increase the scope of energy self-sufficiency. In this situation, it is necessary to develop systemic solutions and infrastructure for collecting the surplus energy produced by prosumer citizens. A computerized system for managing individual household appliances based on smart home technology can thus fit perfectly into the current trend of pro-environmental transformation of the economy. Smart technologies based on artificial intelligence or machine learning, using cloud computing and the Internet of Things, allow the integration of various household devices, including electronics and appliances equipped with microprocessors and smart software. In this way, individual household devices can be integrated into a central, integrated management system based on smart home technology, which can be controlled, for example, from a smartphone, a smart TV remote control, or a smartwatch equipped with the necessary software. Further devices, such as home robots, which can be very helpful for the elderly, can then be connected to this central system in a modular fashion.
On the other hand, the development of computerized management systems for individual household appliances based on smart home technology is also determined by the issue of improving cyber security systems and cyber security risk management. This issue is particularly relevant when a central, integrated system for remote management of individual household appliances is connected to the Internet.
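The central, integrated management system described above can be pictured as a single controller behind which heterogeneous devices are registered. The sketch below is purely illustrative: the device names and commands are hypothetical, and a real deployment would sit on an established protocol (e.g. MQTT or Matter) and add the authentication and cyber-security measures just discussed.

```python
# Illustrative sketch of a central smart-home controller that integrates
# heterogeneous devices behind one command interface.

class SmartHomeController:
    def __init__(self):
        self._devices = {}  # device name -> handler callable

    def register(self, name, handler):
        """Add a device driver under a human-readable name."""
        self._devices[name] = handler

    def command(self, name, action, **params):
        """Dispatch an action to the named device."""
        if name not in self._devices:
            raise KeyError(f"unknown device: {name}")
        return self._devices[name](action, **params)

# Example device as a plain callable (a stand-in for a real driver).
def thermostat(action, **params):
    if action == "set":
        return f"temperature set to {params['celsius']} C"
    return "idle"

controller = SmartHomeController()
controller.register("thermostat", thermostat)
print(controller.command("thermostat", "set", celsius=21))
```

The design point is that each appliance only has to expose one handler; the smartphone or smartwatch front end then talks solely to the controller, which is also the single place where access control can be enforced.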
In a smart city, on the one hand, many of the city's functions are carried out through automated and centrally managed information systems using new Industry 4.0/Industry 5.0 technologies. On the other hand, citizens of a smart city can use many of the city's information services, currently offered mainly through websites and smartphone apps, where defined categories of information appear on the smartphone according to the citizen's location and are automatically added to the calendar, for example. Particularly relevant information applications include systems that alert citizens to unusual weather phenomena, climatic disasters, locally growing pandemic threats, and the like. Smart urban information systems can also cooperate with autonomous vehicle systems.
The issues of energy efficiency in buildings, eco-technology, eco-innovative building materials providing high levels of energy efficiency, sustainable construction, the green smart city, and related topics are among the important elements of a pro-environmental transformation of the economy toward a sustainable, green, zero-carbon, zero-growth and closed-loop economy. I am conducting research on the key determinants of smoothly transforming the classic growth-oriented, brown, linear economy of excess into such a sustainable, green, zero-carbon, zero-growth and closed-loop economy. In view of the above, green, sustainable construction is one of the key elements of the pro-environmental transformation of the economy and of the development of urban agglomerations built on the green smart city model. More and more research institutes are working on new green technologies and eco-innovations that will make the green transformation of the economy more efficient and faster. For example, laboratories at research institutes are developing new, innovative types of photovoltaic panels, including panels that look like window glass. If photovoltaic panels that look like windowpanes are properly refined technologically and come onto the market, they could revolutionize the construction of energy self-sufficient green smart cities. Such innovative photovoltaic panel technology could be very useful in buildings built or planned in modern, sustainable green smart cities.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence technology improve the organizational management process of modern urban agglomerations operating according to the green smart city model?
How can artificial intelligence improve the operation of green smart city management systems?
What do you think on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Open Call on The Internet and Higher Education
We are preparing a course about the use of drones in the Internet of Things in health or medicine. What do you think about this?
Thanks for your help.
I have a new publication
Ricardo-Suárez F, Lastre-Vera X, Fernández-Domínguez T, Alonso-Montalván M, Curbelo-Valera A. Fascitis necrosante periocular. Revista Cubana de Oftalmología [online journal]. 2023 [cited 10 Jan 2024]; 36(3). Available at: https://revoftalmologia.sld.cu/index.php/oftalmologia/article/view/1778
With the rapid development of online banking, including mobile banking, are commercial banks increasing spending on improving cyber risk management processes to a greater extent than on credit risk management?
In recent years, the importance of managing the risk of cybercrime against information systems and of potential loss of data transferred over the Internet has been growing, as has the importance of improving cyber-security systems and instruments for information systems using modern ICT, Internet and Industry 4.0 technologies, including Internet of Things technology. A major factor in the growing importance of cybercrime risk management is the rapid development of online and mobile banking, which accelerated further during the SARS-CoV-2 (Covid-19) coronavirus pandemic owing to the increased digitization and internetization of various spheres of business activity. The financial sector, including commercial banking, is one of the sectors of the economy where the opportunities for applying ICT, Internet and Industry 4.0/5.0 technologies, including artificial intelligence, artificial neural networks, machine learning, deep learning, the Internet of Things, cloud computing, Big Data Analytics, multi-criteria simulation models, digital twins, Blockchain, and virtual and augmented reality, are the greatest. On the other hand, this sphere of advanced information systems is particularly vulnerable to attacks from cybercriminals, who use various techniques to extort bank-account access data from customers and/or hack into e-banking systems. Something is constantly happening in this area: banks implement new ICT and Industry 4.0/5.0 technologies, and cybercriminals likewise take advantage of those same technologies.
Often it even happens the other way around: first cybercriminals create new techniques to seize the customer data needed to log in to bank accounts in Internet banking systems, and only then do the banks' IT specialists patch system gaps and improve the security of access to banking IT systems, firewalls, anti-virus applications, and so on. However, commercial banks operating under the formula of classic deposit-and-credit banking generate most of their revenue and profits from lending, i.e. from providing loans to various types of business entities, to citizens, and to other banks acting as borrowers. Procedures for granting credit, methods of credit risk management, and the regulations shaping lending activity have been refined and adapted to the changing economic environment over many decades. In contrast, online and mobile banking developed over a much shorter period than commercial banks' lending activities. As a result, lending procedures are no longer changing to the same degree as the communication procedures and techniques for accessing banking products that have evolved with Internet banking. In addition, the development of online and mobile banking has increased the scale of cyber-attacks on banking systems and thus the importance of securing banking information systems, and this increase has been faster in recent years than the refinement of credit procedures. Consequently, commercial banks have in recent years allocated significantly more expenditure to improving cyber-security systems and instruments for banking information systems, and to improving cyber-security risk management, than to improving credit risk management.
Besides, both risk management processes can increasingly be carried out in an integrated manner.
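To illustrate what such an integrated treatment of the two risk types might look like, here is a deliberately simple, hypothetical sketch that blends a credit-risk indicator and a cyber-risk indicator into one score. The indicator names and the weights are invented for illustration and do not represent any bank's actual methodology.

```python
# Hypothetical sketch of an integrated risk view: a weighted blend of a
# normalized credit-risk indicator and a normalized cyber-risk indicator.
# Weights are illustrative assumptions only.

def integrated_risk_score(credit_default_prob: float,
                          cyber_incident_rate: float,
                          w_credit: float = 0.6,
                          w_cyber: float = 0.4) -> float:
    """Blend two risk indicators, each expected to lie in [0, 1]."""
    for v in (credit_default_prob, cyber_incident_rate, w_credit, w_cyber):
        if not 0.0 <= v <= 1.0:
            raise ValueError("inputs must lie in [0, 1]")
    return w_credit * credit_default_prob + w_cyber * cyber_incident_rate

# A portfolio with modest credit risk but elevated cyber exposure:
print(round(integrated_risk_score(0.05, 0.30), 3))  # 0.15
```

In practice, each input would itself come from an elaborate model (probability-of-default models on the credit side, incident and vulnerability statistics on the cyber side); the point of the sketch is only that the two streams can feed one consolidated management view.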
In view of the above, I address the following question to the esteemed community of scientists and researchers:
With the rapid development of online banking, including mobile banking, are commercial banks increasing spending on improving cyber risk management processes more than on credit risk management?
Are commercial banks increasing spending on improving cybersecurity risk management processes more than on credit risk management?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
In our opinion, altmetrics, as alternative metrics, are something of an antithesis to classical metrics. The most important question concerns sources: for now, the sources are rather undefined and incomplete. However, judging by the further development of altmetrics, the altmetrics services are clearly aware of this. They have therefore been establishing cooperation with commercial databases such as Web of Science, Elsevier, Scopus and many others. Every day, through social media, we follow the development of altmetrics, reading news such as "Plum Analytics Joins Elsevier". The increasing popularity of altmetrics is shown by the fact that more and more publishers, such as BioMed Central and the Nature Publishing Group, display the impressive altmetric "donut" on their websites.
Exploring the potential of edge computing to bolster security measures in Internet of Things (IoT) networks.
In the context of the ever-expanding Internet of Things (IoT), cybersecurity is becoming increasingly critical. As IoT networks grow in complexity and size, they present unique security challenges due to the diversity and number of connected devices, along with the vast amount of data they generate and process. This question seeks insights into effective strategies for enhancing cybersecurity in such environments, and aims to gather a comprehensive understanding of current best practices and future directions for securing IoT networks, drawing on the expertise and experience of researchers in cybersecurity, network design, IoT technologies, and related fields.
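One concrete way edge computing can bolster IoT security, as raised above, is by screening sensor data locally before it leaves the network. The following is a minimal sketch, with illustrative window size and threshold, of a z-score anomaly detector light enough to run on an edge gateway; it is an example of the general idea, not a production design.

```python
# Sketch of a lightweight anomaly detector for an IoT edge gateway:
# flag readings that deviate strongly from the recent rolling mean.
from collections import deque
from statistics import mean, pstdev

class EdgeAnomalyDetector:
    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # rolling window of readings
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= 5:  # wait for a small baseline
            mu = mean(self.readings)
            sigma = pstdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.readings.append(value)
        return anomalous

detector = EdgeAnomalyDetector()
normal = [detector.observe(v) for v in [20.0, 20.5, 19.8, 20.2, 20.1, 20.3]]
spike = detector.observe(80.0)  # e.g. a spoofed temperature reading
print(normal, spike)
```

Catching such outliers at the edge reduces both the attack surface (less raw data in transit) and the load on central systems, which matches the motivation for edge-based IoT security.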
What are the possibilities for integrating an intelligent chatbot into web-based video conferencing platforms used to date for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
During the SARS-CoV-2 (Covid-19) coronavirus pandemic, due to quarantine periods implemented in many countries, restrictions on the use of physical retail outlets, cultural services and various public places, and government-imposed lockdowns of businesses operating mainly in selected service sectors of the economy, the use of web-based videoconferencing platforms increased significantly. The periodic transfer of education to remote formats conducted via online videoconferencing platforms also increased the scale of ICT use in education. Meanwhile, since the end of 2022, when OpenAI released one of the first widely available intelligent chatbots, ChatGPT, the development of artificial intelligence applications has accelerated across various online information services, and generative artificial intelligence technology has been implemented in many aspects of business activity in companies and enterprises. The intelligent language model tools made available on the Internet by technology companies have been taught to converse with Internet users through artificial neural network technologies modeled on the human neuron, using deep learning on knowledge bases and databases that have accumulated large amounts of data and information drawn from many websites. Nowadays, there are opportunities to combine these technologies so that web-based videoconferencing platforms gain new applications and/or functionalities enriched with tools based on generative artificial intelligence.
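One possible shape of such an integration can be sketched in a few lines. Everything here is hypothetical: the platform hook (`on_chat_message`) and the canned answer function are placeholders standing in for a real videoconferencing bot API and a hosted language model.

```python
# Conceptual sketch: a meeting bot that listens to the conference chat and
# answers questions. The answer function below is a tiny canned FAQ, a
# stand-in for a call to a generative language model.

def keyword_answerer(question):
    """Hypothetical stand-in for a generative model."""
    faq = {
        "agenda": "Today's agenda: project status, then Q&A.",
        "deadline": "The submission deadline is at the end of the month.",
    }
    for keyword, answer in faq.items():
        if keyword in question.lower():
            return answer
    return "I will forward this question to the organizers."

class MeetingBot:
    def __init__(self, answer_fn):
        self.answer_fn = answer_fn
        self.transcript = []  # record of all chat messages seen

    def on_chat_message(self, speaker, text):
        """Hook the (hypothetical) platform would call per chat message."""
        self.transcript.append((speaker, text))
        if text.strip().endswith("?"):
            return self.answer_fn(text)  # reply only to questions
        return None

bot = MeetingBot(keyword_answerer)
print(bot.on_chat_message("Alice", "What is the deadline?"))
```

In a real integration, the bot would join the meeting through the platform's bot or webhook interface, and `keyword_answerer` would be replaced by a request to a language model, optionally conditioned on the accumulated transcript for context-aware answers or live summaries.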
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities of connecting an intelligent chatbot to web-based video conferencing platforms used so far for remote conferences, symposia, training, webinars and remote education conducted over the Internet?
What are the possibilities of integrating a smart chatbot into web-based video conferencing platforms?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I am in a small disagreement with my statistician and would appreciate some additional perspective. This statistical question is regarding a cohort of participants with cancer. I am interested in investigating whether participants who have changed a behaviour illustrate different sociodemographic/treatment associations, compared to those participants who have not illustrated a behaviour change. To do this, I have created a datapoint from two survey questions I collected.
- Data point 1. Participants indicated 'yes' or 'no' to whether they currently use the internet (cross-sectional timepoint being the completion of the survey).
- Data point 2. Participants were asked retrospectively whether they used the internet for cancer information at a differing timepoint (immediately after their own diagnosis).
- Data point 3. I have created a third data point which is what I wish to analyse against sociodemographic/treatment data - this datapoint asks whether a participant demonstrates a difference in use of internet between Data point 1 and Data point 2?
Thus, for Data point 3, if no difference exists (i.e. 'no'), the participant reported 'no' at both Data points 1 and 2, or alternatively 'yes' at both. If a difference does exist ('yes'), the participant has at least one 'yes' and one 'no' across Data points 1 and 2, regardless of order.
I have created this (data point 3) as I specifically wish to compare these groups (participants who illustrate behaviour change vs. those that do not) against sociodemographic and treatment variables (e.g. Age <65 vs Age>=65). I believe these populations may differ based on other literature.
My statistician believes that this requires a McNemar test and that this is paired data. I disagree as I feel I am not comparing paired data, i.e. not directly comparing how many men say yes vs no between 2 timepoints.
Instead, my belief is that I have created a new data point (datapoint 3) from the original paired data (data point 1 and 2). Data point 3 is itself a new item which stands alone, and is thus not paired. Ergo, in this instance, when I compare in a 2x2 (e.g. behaviour change status vs age <65/age>65) a chi-square test is fine.
Your thoughts? Thank you in advance for reading, and any advice/perspective. Apologies for the length of my query.
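For what it may be worth, the two positions can be made concrete in code. A derived "change" indicator cross-tabulated against a between-subject variable such as age band is an ordinary 2x2 test of independence, whereas McNemar's test would compare the paired marginals of Data points 1 and 2 themselves. A minimal sketch, with all data fabricated purely for illustration:

```python
# Hypothetical illustration: derive Data point 3 ("behaviour change") from
# the two paired yes/no responses, then cross-tabulate it against age band.

def changed(use_now: str, used_at_diagnosis: str) -> bool:
    """Data point 3: True if the two responses differ, regardless of order."""
    return use_now != used_at_diagnosis

# (use_now, used_at_diagnosis, age_band) -- fabricated example records
records = [
    ("yes", "no",  "<65"), ("yes", "yes", "<65"), ("no",  "no",  ">=65"),
    ("no",  "yes", ">=65"), ("yes", "yes", ">=65"), ("yes", "no", "<65"),
]

# Build the 2x2 table: change status x age band
table = {(c, a): 0 for c in (True, False) for a in ("<65", ">=65")}
for now, then, age in records:
    table[(changed(now, then), age)] += 1

def chi2_2x2(t):
    """Pearson chi-square statistic for a 2x2 table (no continuity correction)."""
    a, b = t[(True, "<65")], t[(True, ">=65")]
    c, d = t[(False, "<65")], t[(False, ">=65")]
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

stat = chi2_2x2(table)
```

The point of the sketch: once Data point 3 exists, the rows of the 2x2 are independent participants, not paired observations, which is the situation the chi-square test of independence assumes; McNemar would apply if the question were whether the proportion saying 'yes' shifted between the two timepoints.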
These are the links of the two journals:
Thank you.
I am seeking a tool that can analyze PDFs of articles and theses in English, capable of searching the internet and academic databases for similar texts and ideas in various languages.
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Recently, accelerated technological progress is being made, including the development of generative artificial intelligence technology. The aforementioned technological progress made in the improvement and implementation of ICT information technologies, including the development of applications of tools based on generative artificial intelligence is becoming a symptom of the transition of civilization to the next technological revolution, i.e. the transition from the phase of development of technologies typical of Industry 4.0 to Industry 5.0. Generative artificial intelligence technologies are finding more and more new applications by combining them with previously developed technologies, i.e. Big Data Analytics, Data Science, Cloud Computing, Personal and Industrial Internet of Things, Business Intelligence, Autonomous Robots, Horizontal and Vertical Data System Integration, Multi-Criteria Simulation Models, Digital Twins, Additive Manufacturing, Blockchain, Smart Technologies, Cyber Security Instruments, Virtual and Augmented Reality and other Advanced Data Mining technologies. In addition to this, the rapid development of generative AI-based tools available on the Internet is due to the fact that more and more companies, enterprises and institutions are creating their chatbots, which have been taught specific skills previously performed only by humans. In the process of deep learning, which uses artificial neural network technologies modeled on human neurons, the created chatbots or other tools based on generative AI are increasingly taking over from humans to perform specific tasks or improve their performance. The main factor in the growing scale of applications of various tools based on generative AI in various spheres of business activities of companies and enterprises is due to the great opportunities to automate complex, multi-criteria, organizationally advanced processes and reduce the operating costs of carrying them out with the use of AI technologies. 
On the other hand, certain risks may be associated with the application of AI generative technology in business entities, financial and public institutions. Among the potential risks are the replacement of people in various jobs by autonomous robots equipped with generative AI technology, the increase in the scale of cybercrime carried out with the use of AI, the increase in the scale of disinformation and generation of fake news on online social media through the generation of crafted photos, texts, videos, graphics presenting fictional content, non-existent events, based on statements and theses that are not supported by facts and created with the use of tools available on the Internet, applications equipped with generative AI technologies.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, will the development of artificial intelligence applications be associated mainly with opportunities, positive aspects, or rather threats, negative aspects?
Will there be mainly opportunities or rather threats associated with the development of artificial intelligence applications?
I am conducting research in this area. Particularly relevant issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Can anyone suggest papers related to the problem statement for this topic?
Can anyone suggest papers related to the state of the art?
Hi, I used Materials Studio to build a selenium nanoparticle. I then used AutoDock to dock the nanoparticle with HSA. When inputting the ligand I received this error:
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
****************************************************************
Personal firewall software may warn about the connection IDLE
makes to its subprocess using this computer's internal loopback
interface. This connection is not visible on any external
interface and no data is sent to or received from the Internet.
****************************************************************
IDLE 1.2.2 ==== No Subprocess ====
>>> adding gasteiger charges to receptor
NanoSe3: :MOL2:Se and NanoSe3: :MOL2:Se have the same coordinates
ERROR *********************************************
Traceback (most recent call last):
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\ViewerFramework\VF.py", line 898, in tryto
result = command( *args, **kw )
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\AutoDockTools\autotorsCommands.py", line 1008, in doit
initLPO4(mol, cleanup=cleanup)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\AutoDockTools\autotorsCommands.py", line 292, in initLPO4
root=root, outputfilename=outputfilename, cleanup=cleanup)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\AutoDockTools\MoleculePreparation.py", line 1019, in __init__
detect_bonds_between_cycles=detect_bonds_between_cycles)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\AutoDockTools\MoleculePreparation.py", line 768, in __init__
delete_single_nonstd_residues=False)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\AutoDockTools\MoleculePreparation.py", line 143, in __init__
self.addCharges(mol, charges_to_add)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\AutoDockTools\MoleculePreparation.py", line 229, in addCharges
chargeCalculator.addCharges(mol.allAtoms)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\MolKit\chargeCalculator.py", line 80, in addCharges
babel.assignHybridization(atoms)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\PyBabel\atomTypes.py", line 137, in assignHybridization
self.valence_two()
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\PyBabel\atomTypes.py", line 266, in valence_two
angle1 = bond_angle(k.coords, a.coords, l.coords)
File "C:\Program Files (x86)\MGLTools-1.5.6\lib\site-packages\PyBabel\util.py", line 47, in bond_angle
raise ZeroDivisionError("Input used:", a, b, c)
ZeroDivisionError: ('Input used:', [-3.7719999999999998, -9.9429999999999996, -5.774], [-3.7719999999999998, -9.9429999999999996, -5.774], [-3.7719999999999998, -9.9429999999999996, -5.774])
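The message "NanoSe3: :MOL2:Se and NanoSe3: :MOL2:Se have the same coordinates" and the three identical coordinate triples in the ZeroDivisionError suggest that two or more atoms in the nanoparticle sit at exactly the same position, so PyBabel's bond-angle routine divides by a zero-length vector. A minimal sketch (the MOL2 column layout, filename and tolerance are assumptions; adjust to the actual file) to locate such duplicates before loading the structure into AutoDockTools:

```python
# Scan the @<TRIPOS>ATOM section of a MOL2 file for atoms whose (x, y, z)
# coordinates coincide; such duplicates should be removed or perturbed
# before Gasteiger charge assignment.
from collections import defaultdict

def find_duplicate_coords(mol2_lines, tol=1e-3):
    """Return lists of atom ids whose coordinates coincide within `tol`."""
    seen = defaultdict(list)
    in_atoms = False
    for line in mol2_lines:
        if line.startswith("@<TRIPOS>"):
            in_atoms = line.strip() == "@<TRIPOS>ATOM"
            continue
        if not in_atoms or not line.strip():
            continue
        parts = line.split()
        # Assumed columns: id, name, x, y, z, type, ...
        atom_id, x, y, z = parts[0], *map(float, parts[2:5])
        key = tuple(round(v / tol) for v in (x, y, z))
        seen[key].append(atom_id)
    return [ids for ids in seen.values() if len(ids) > 1]

# Toy input reproducing the coordinates from the traceback above
example = """@<TRIPOS>ATOM
1 Se -3.772 -9.943 -5.774 Se
2 Se -3.772 -9.943 -5.774 Se
3 Se -1.000 -2.000 -3.000 Se
@<TRIPOS>BOND
""".splitlines()

dups = find_duplicate_coords(example)  # -> [['1', '2']]
```

Deleting one atom of each coinciding pair (or rebuilding the nanoparticle so no two atoms overlap) should let the Gasteiger charge step complete.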
The future of blockchain-based internet solutions
Blockchain is defined as a decentralized and distributed database in the open source model in a peer-to-peer internet network without central computers and without a centralized data storage space, used to record individual transactions, payments or journal entries encoded using cryptographic algorithms.
In current applications, blockchain is usually a decentralized and dispersed register of financial transactions. It is also a decentralized transaction platform in a distributed network infrastructure. In this formula, blockchain is currently implemented into financial institutions.
Some banks are already trying to use blockchain in their operations. If they did not, other economic entities implementing blockchain, including fintechs, could become more competitive in this respect. However, cryptocurrencies and a secure record of transactions are not the only blockchain applications. Various other potential applications of blockchain are being considered for the future.
Perhaps these new, different applications already exist in specific companies, corporations, public institutions or research centers in individual countries. In view of the above, the current question is: in what applications, besides cryptocurrency, is blockchain used in your company, organization, country, etc.?
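The definition above, individual records cryptographically linked into a tamper-evident chain, can be illustrated with a minimal hash chain. This is a sketch of the linking idea only, not a distributed peer-to-peer ledger: it has no network, consensus, or mining, and the field names are invented for illustration.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Serialize deterministically, then hash; each block commits to its
    # predecessor through the "prev" field.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev": prev})

def chain_valid(chain: list) -> bool:
    """Recomputing the links detects any tampering with earlier entries."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
for entry in ("genesis", "payment A->B 10", "payment B->C 4"):
    append_block(ledger, entry)

assert chain_valid(ledger)
ledger[1]["data"] = "payment A->B 1000"   # tampering with a past record...
assert not chain_valid(ledger)            # ...breaks every later link
```

This property, that altering any past entry invalidates all subsequent links, is what makes the register suitable for recording transactions, payments or journal entries without a central authority.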
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Title:
Enhancing Privacy-Preserving Authentication through NIST, PCI, IETF, and ICANN Compliant Encryption and Zero-Knowledge Proofs
Abstract:
This proposal aims to develop an advanced mathematical framework for a third-party service provider to authenticate device users' activities while preserving privacy. The framework will utilize encrypted GPS coordinates and multi-factor authentication in compliance with NIST, PCI, IETF, and ICANN standards. It will focus on Zero-Knowledge Proofs (ZKPs) to maintain user privacy, exploring three distinct models: the plain model, the common random string model, and the random oracle model.
Introduction:
- Background: As digital interactions increase, the need for robust, privacy-preserving authentication mechanisms becomes crucial.
- Objective: To create a mathematical model for third-party verification that adheres to international standards and utilizes ZKPs to ensure user privacy during various digital interactions.
Standards and Compliance:
- NIST (National Institute of Standards and Technology): Explore encryption standards and guidelines for secure cryptographic practices.
- PCI (Payment Card Industry): Incorporate data security standards for handling GPS and transaction-related data.
- IETF (Internet Engineering Task Force): Follow protocols and standards for internet security, including SSL/TLS for sessions and RDP connections.
- ICANN (Internet Corporation for Assigned Names and Numbers): Ensure compliance with domain name and IP address standards for device authentication.
Methodology:
- Modeling Encrypted GPS Coordinates: Develop encryption/decryption algorithms compliant with NIST, PCI, IETF, and ICANN standards. Evaluate and select Elliptic Curve or RSA encryption methods for their suitability and compliance.
- Multi-Factor Authentication Integration: Incorporate additional authentication factors such as atomic time, device IMEI numbers, and user knowledge. Create a unified model that integrates these factors securely and efficiently.
- Zero-Knowledge Proofs for Privacy: Plain Model: Implement interactive ZKPs where the verifier selects random challenges and the prover responds, convincing the verifier of the prover's knowledge without revealing it. Common Random String Model: Utilize non-interactive ZKPs where both parties have access to a common random string, facilitating the proof without interaction. Random Oracle Model: Apply the Fiat–Shamir heuristic for non-interactive ZKPs, assuming the computational hardness of certain problems (e.g., collision resistance of hash functions).
- Geofence Authentication without Revealing Location: Employ ZKPs to validate a device's presence within a geofence without disclosing exact coordinates. Ensure that these proofs are efficient, secure, and compliant with the identified standards.
- Third-Party Verification Protocol: Develop protocols allowing third parties to verify actions like SSL sessions and contract signings without accessing sensitive location or private data.
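To make the random oracle model concrete: the Fiat–Shamir heuristic replaces the verifier's random challenge with a hash of the transcript. A minimal sketch of a non-interactive Schnorr-style proof of knowledge of a discrete logarithm, using toy parameters (the group, hash and encoding below are illustrative assumptions, not the proposal's actual design; real deployments use standardized groups or curves):

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(*vals) -> int:
    """Fiat-Shamir challenge: hash the transcript (random oracle model)."""
    data = ".".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    r = secrets.randbelow(q)
    t = pow(g, r, p)                  # commitment
    c = H(g, pow(g, x, p), t)         # challenge derived by hashing
    s = (r + c * x) % q               # response
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    c = H(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q - 1) + 1      # prover's secret
y = pow(g, x, p)                      # public value
t, s = prove(x)
assert verify(y, t, s)                # honest proof accepted
assert not verify(y, t, (s + 1) % q)  # tampered response rejected
```

The same commit-challenge-respond shape underlies the geofence idea above: the statement proven would be "my encrypted coordinates lie inside the fence" rather than knowledge of a discrete logarithm.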
Expected Outcomes:
- Mathematical Framework: A detailed model combining encrypted GPS, multi-factor authentication, and ZKPs.
- Compliance and Security Analysis: Assessment of the framework's adherence to NIST, PCI, IETF, and ICANN standards.
- Privacy-Preserving Protocols: Efficient and secure protocols for third-party verification that maintain user privacy.
Significance:
- For Users: Ensures privacy and security in digital transactions and interactions.
- For Service Providers: Provides a reliable and compliant way to authenticate user activities.
- For Regulatory Bodies: Sets a new standard for privacy-preserving, compliant authentication systems.
Has the SARS-CoV-2 (Covid-19) coronavirus pandemic caused a reduction or increase in remote online communication, business cooperation, co-operation, clustering, etc. between companies, businesses, between business entities, financial institutions, public institutions, local government, non-governmental organisations and other entities?
In the sectors of manufacturing companies, financial institutions, online technology companies, online shops, etc., which experienced strong sales increases during the pandemic, the scale of business cooperation between business entities may have increased significantly. In contrast, in service sectors subject to lockdowns, forced reduction or real temporary cessation of business activities, sectors in lockdown-induced crisis and recession, the scale of development of business cooperation between economic operators may have decreased significantly. During the SARS-CoV-2 (Covid-19) coronavirus pandemic, lockdowns imposed on selected service and commercial sectors of the economy were introduced in some countries, triggering an economic recession in mid-2020. In addition to this, international supply and procurement logistics chains were disrupted which further reduced the ability to produce certain types of goods and exacerbated the economic crisis. As a result, some operators decided to carry out recovery programmes and to increase the scale of their business using the Internet, including providing their services, offering products via the Internet, selling their product and service offerings online, improving e-logistics and remote Internet communication. Therefore, as a result of the downturn in the economy, the decline in economic activity, the scale of business cooperation in many businesses may have decreased. However, on the other hand, the scale of business and other cooperation conducted through remote Internet communication, the development of e-logistics, online payments and settlements, etc. may have increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Has the pandemic of the SARS-CoV-2 coronavirus (Covid-19) caused a decrease or increase in the scale of remote Internet communication, business cooperation, co-operation, clustering, etc. between companies, enterprises, between business entities, financial institutions, public institutions, local governments, non-governmental and other entities?
And what is your opinion on this topic?
What do you think about this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
How will blockchain technology affect the future of the Internet, and how do you think blockchain technology and cryptocurrency might affect the economy in the future?
What are the analytical tools supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks available on the Internet that can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business, implementation of economic, investment, business projects, etc.?
Since OpenAI brought ChatGPT online in November 2022, interest among business entities in using intelligent chatbots for various aspects of business operations has increased strongly. Intelligent chatbots originally enabled only, or mainly, conversations and discussions, answering questions using data, information and knowledge taken from a selection of many websites. In the following months, OpenAI released other intelligent applications on the Internet, allowing users to generate images, photos, graphics and videos, solve complex mathematical tasks, create software for new computer applications, generate analytical reports, and process various types of documents based on given commands. In addition, in 2023 other technology companies also began to make their intelligent applications available on the Internet, through which certain complex tasks can be carried out to facilitate processes in companies, enterprises, financial institutions, etc., and thus facilitate business. There is a steady increase in the number of intelligent applications and tools available on the Internet that can support various aspects of business activities carried out in companies and enterprises. At the same time, the number of new business applications of these smart applications is growing rapidly.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the analytical tools available on the Internet supported by artificial intelligence technology, machine learning, deep learning, artificial neural networks, which can be helpful in business, can be used in companies and/or enterprises for improving certain activities, areas of business activity, implementation of economic, investment, business projects, etc.?
What are the AI-enabled analytical tools available on the Internet that can be helpful to business?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
What is the most important thing for the development of good scientific cooperation in terms of, among other things, conducting exchanges on scientific research, joint team research, joint team publication of scientific research results, etc.?
Scientific cooperation can develop at the scale of specific scientific and research institutions, scientific and teaching institutions, research and development centers, research and implementation laboratories, educational institutions, research centers and laboratories of companies and enterprises, government agencies dealing with science and scientific research, local government institutions, non-governmental organizations whose activities are based on the results of scientific research, and so on. It can develop on a national and/or international scale, and within one scientific discipline or across several, i.e. interdisciplinarily. Scientific cooperation may develop, among other things, through exchanges of experience in scientific research, joint team research, joint team publication of research results, etc. Online databases indexing scientific institutions, scientific publications, researchers and scientists can be helpful in establishing scientific cooperation. Besides, Internet portals that enable the remote exchange of scientific experience and discussion of scientific topics can also help develop scientific cooperation. An example of this kind of scientific portal is this ResearchGate portal, where we can hold discussions on scientific topics and ask and answer questions in the discussion forum. In this way, new scientific cooperation can also be initiated, which I hereby encourage.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What is the most important thing for the development of good scientific cooperation in terms of, among other things, conducting exchanges of experience in scientific research, joint team research, joint team publication of scientific research results, etc.?
What is the most important thing for good scientific cooperation to develop?
And what is your opinion about it?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
What is mobile technology in supply chain management and how does the Internet impact supply chain management?
How do the Internet and e-business affect supply chain management, and how does the Internet affect supply chains?
What is the role of the Internet of Things in supply chain management and its diversity, and how does information technology improve supply chains?
What is Internet supply chain management, and how does information technology, especially the Internet, impact the supply chain management process?
How does the Internet impact supply chain management, and how does e-commerce technology affect supply chain management?
I didn't find enough information from the internet.
I have a Bruker FTIR instrument with the OPUS software series. I have compared it with other current series and am trying to update it to improve manipulation of the IR spectrum. Can I get a guide for doing this?
I do not think that any approach to AI can ignore the massive data provided by the internet, part of which is nothing more than the digitalization of pre-internet or non-internet material. There is of course the problem of the enormously varying quality and reliability of this material, the presence of redundancy and its sheer vastness, which could lead one to wonder whether processing such raw data via rudimentary algorithms is really worth the energetic and environmental costs or the use of the expensive infrastructure involved.
I believe that the correct approach to AI must be based on formal logic and the logical-algebraic frameworks of theoretical computer science, as well as other kinds of mathematics beyond the ones commonly employed in machine learning.
The Semantic Web project seemed a good approach along these lines. It involves a logical and formal semantic analysis of natural language. It calls for a far more sophisticated way of producing internet content and (re)presenting human knowledge on the internet. No Data without Metadata. We need a machine-human logical-semantic interlingua so that internet data can become machine readable in a logical and semantic sense (rather than mere statistical data chunked by a machine learning algorithm).
We should be able to effect complex structured queries to intelligent evolving self-correcting interlinked data bases according to varying degrees of precision which will be able to output the source and a measure of reliability of the data presented.
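The ideas above, "no data without metadata" and structured queries that return the source and a measure of reliability, can be sketched as a toy triple store. The vocabulary, sources and scores below are invented for illustration; a real Semantic Web system would use RDF, SPARQL and formal ontologies.

```python
# Every fact is a (subject, predicate, object) triple carrying provenance
# and a reliability score; queries are structured patterns, not keywords.
from typing import NamedTuple

class Fact(NamedTuple):
    subject: str
    predicate: str
    object: str
    source: str         # provenance metadata
    reliability: float  # 0.0-1.0, assigned by curation (invented here)

store = [
    Fact("Internet", "grewOutOf", "ARPAnet", "encyclopedia", 0.95),
    Fact("Internet", "connectedBy", "backbone networks", "textbook", 0.90),
    Fact("ARPAnet", "fundedBy", "US Government", "archive", 0.85),
]

def query(subject=None, predicate=None, obj=None, min_reliability=0.0):
    """Match triples against a pattern; None acts as a wildcard."""
    return [
        f for f in store
        if (subject is None or f.subject == subject)
        and (predicate is None or f.predicate == predicate)
        and (obj is None or f.object == obj)
        and f.reliability >= min_reliability
    ]

hits = query(subject="Internet", min_reliability=0.9)
```

Each answer arrives with its source and reliability attached, which is precisely what a statistical language model trained on chunked text cannot, by construction, guarantee.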
Machine learning will come into play for example at the level of automatic theorem proving, of the massively difficult task of the processing of logical queries.
Our ethical principles can be given a formal logical formulation that can be understood by machines.
It seems that this approach (even if it demands more time and work and is filled with challenges) is far more desirable than internet-based Large Language Models. This kind of 'intelligent' AI seems, in the long run, a better ethical, environmental and human choice.
Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to its full extent?
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to their full extent?
There are different types of websites and sources of data and information on the Internet. The first Internet-accessible intelligent chatbot, ChatGPT, made available by OpenAI in November 2022, performs commands, solves tasks and writes texts on the basis of knowledge, data and information downloaded from the Internet that were not fully up to date, having been drawn from selected websites and portals most recently in January 2022. The data and information were downloaded from many selected websites of libraries, articles, books, online portals indexing scientific publications, etc., and so were selected in a certain way. In 2023, more leading Internet technology companies developed and made their intelligent chatbots available on the Internet. Some of them are already based on data and information much more up to date than the first versions of ChatGPT released in open access. In November 2023, the social media site X (the former Twitter) released its intelligent chatbot in the US, which reportedly works on the basis of up-to-date information entered into the site through posts, messages and tweets made by Internet users. Also in October 2023, OpenAI announced that it would create a new version of ChatGPT that would draw data and knowledge from updated knowledge resources downloaded from multiple websites. As a result, rival leading technology firms are constantly refining the designs of the intelligent chatbots they are building, which will use increasingly updated data, information and knowledge drawn from selected websites, web pages and portals. The rapid technological advances currently taking place in artificial intelligence may in the future lead to the integration of generative artificial intelligence and general artificial intelligence developed by technology companies.
Competing technology companies may strive to build advanced artificial intelligence systems that achieve a high level of autonomy and independence from humans, which could allow the development of artificial intelligence technology to slip out of human control. Such a situation may arise with the emergence of a highly advanced general artificial intelligence that achieves the ability to improve itself and, moreover, does so independently of humans, i.e. self-improvement with a simultaneous escape from human control. Before this happens, however, technologically advanced artificial intelligence may earlier achieve the ability to select the data and information it uses to carry out specific mandated tasks, executing them in real time using up-to-date data and online knowledge resources.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
As part of the development of the concept of universal open access to knowledge resources, should the intelligent chatbots created by technology companies available on the Internet be connected to Internet resources to their full extent?
Should the intelligent chatbots created by technology companies available on the Internet be connected to the resources of the Internet to the full extent?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
I have been searching for the internet page on Nomenclator Zoologicus online by uBio but it seems that the link is not working.
Hi, I'm doing scientific research about CO2 emission by countries, I found some websites that have the data I need but I don't know whether it's reliable or not.
How can the development of artificial intelligence technologies and applications help the development of science, the conduct of scientific research, the processing of results obtained from scientific research?
In recent discussions on the ongoing rapid development of artificial intelligence technologies, including generative artificial intelligence and general artificial intelligence, and their rapidly growing applications, a number of positive determinants of this development are emerging, but a number of potential risks and threats are also being identified. Recently identified key risks associated with the development of artificial intelligence include: the possibility of AI technologies being used by cybercriminals and in hacking activities; the use of open-access generative AI tools on the Internet to create crafted texts, photos, graphics and videos and to post them on social media sites as fake news and disinformation; the use of "creations" produced with intelligent chatbot applications in marketing communications; the potential for many jobs to be replaced by AI; and the development of ever more capable generative artificial intelligence, which may soon be creating new, even more capable AI technologies that could escape human control. Currently, all leading technology and Internet companies are developing their own intelligent chatbots and AI-based tools, including generative AI and/or general AI, which they are already making available on the Internet or soon will. In this way, a kind of technological arms race is being waged between the major technology companies at the forefront of ICT, Internet and Industry 4.0/5.0 information technologies. The technological progress currently taking place is accelerating as part of the transition from Industry 4.0 to Industry 5.0 technologies.
In the context of the emerging threats mentioned above, many companies, enterprises and banks are already implementing and developing AI-based tools and applications in order to increase the efficiency of processes carried out within their business, logistics and financial activities. In addition, ongoing discussions consider the possibility of applying AI technologies in positively interpreted ways, to solve various problems of the current development of civilization, including supporting ongoing scientific research and the development of science in its various disciplines. Accordingly, an important area of positive application of AI technology is its use to improve the efficiency of reliably and ethically conducted scientific research. Thus, the development of science could be supported by implementing AI technology in the realm of science.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the development of artificial intelligence technologies and applications support the development of science, the conduct of scientific research, and the processing of research results?
How can the development of artificial intelligence help the development of science and scientific research?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research. In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet, built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, be designed?
The scale of disinformation on the Internet, including fake news, has been growing in recent years, mainly on social media. Disinformation develops primarily on social media sites that are popular among young people, children and teenagers. The growing scale of disinformation is particularly socially damaging given the key objective pursued by cybercriminals and certain organisations, which use, for example, the technique of publishing posts and banners containing fake news from fake profiles of fictitious Internet users. The aim is to influence public opinion and shape the general social awareness of citizens; to influence the assessment of specific policies of governments, national and/or international organisations, and public or other institutions; to influence the ratings, credibility, reputation and recognition of specific institutions, companies and enterprises, their product and service offerings, and individuals; to influence the results of parliamentary, presidential and other elections; and so on. In addition, the scale of cybercriminal activity and the improvement of cybersecurity techniques have been growing in parallel on the Internet in recent years. Therefore, as part of improving techniques to reduce the scale of disinformation spread deliberately by specific national and/or international organisations, computerised platforms are being built to detect fake news and other forms of disinformation on the Internet, using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies.
Since cybercriminals and organisations generating disinformation use new Industry 4.0 technologies when creating fake profiles on popular social networks, the same family of Industry 4.0 information technologies, including but not limited to Big Data Analytics, artificial intelligence, deep learning and machine learning, should also be used to reduce the scale of such activities that are harmful to citizens.
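One building block of such a platform, the content-classification layer, can be sketched in miniature. The following Python sketch is a toy multinomial Naive Bayes text classifier trained on an invented, hand-labelled corpus (all example texts and the "fake"/"real" labels are illustrative assumptions, not real data); a production platform would instead train deep-learning models on large labelled datasets, but the pipeline shape is the same: tokenise, learn per-class word statistics, score new posts.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a real platform would use a proper NLP tokenizer.
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesNewsFilter:
    """Toy multinomial Naive Bayes: score(label) = log P(label) + sum log P(word|label)."""

    def fit(self, texts, labels):
        self.priors = Counter(labels)
        self.words = {label: Counter() for label in self.priors}
        for text, label in zip(texts, labels):
            self.words[label].update(tokenize(text))
        self.vocab = {w for counter in self.words.values() for w in counter}
        return self

    def predict(self, text):
        best, best_score = None, float("-inf")
        for label, prior in self.priors.items():
            total = sum(self.words[label].values())
            score = math.log(prior / sum(self.priors.values()))
            for w in tokenize(text):
                # Laplace smoothing so an unseen word does not zero out a class.
                score += math.log((self.words[label][w] + 1) / (total + len(self.vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

# Invented toy corpus -- a real platform would train on large labelled datasets.
train_texts = [
    "shocking secret cure banned by doctors",
    "miracle trick they do not want you to know",
    "central bank publishes quarterly inflation report",
    "university study finds modest link in peer reviewed journal",
]
train_labels = ["fake", "fake", "real", "real"]
clf = NaiveBayesNewsFilter().fit(train_texts, train_labels)
```

On this toy corpus, `clf.predict("banned miracle cure doctors hide")` leans towards "fake". The design choice worth noting is the Laplace smoothing: without it, any word unseen for a class would collapse that class's probability to zero.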
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet, built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, be designed?
And what do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
What are the possibilities of applying generative AI in terms of conducting sentiment analysis of changes in Internet users' opinions on specific topics?
What are the possibilities of applying generative artificial intelligence in carrying out sentiment analysis on changes in the opinions of Internet users on specific topics using Big Data Analytics and other technologies typical of Industry 4.0/5.0?
Nowadays, Internet marketing is developing rapidly, including viral marketing used on social media sites, for example real-time marketing conducted in a viral formula. It is also marketing aimed at precisely defined groups, audience segments and potential customers of a specific advertised product and/or service offering. To improve Internet marketing, new ICT and Industry 4.0/5.0 technologies are being implemented. Marketing conducted in this form is usually preceded by market research based, among other things, on sentiment analysis of the preferences of potential consumers, derived from their activity on the Internet: comments written on various websites, Internet forums and blogs, and posts on social media. In recent years, the importance of such sentiment analysis carried out on large data sets using Big Data Analytics has been growing; it makes it possible to study the psychological aspects of changes in trends in the markets for products, services, factors of production and financial instruments. The development of this analytics makes it possible to study the determinants of specific market phenomena caused by changes in the preferences and behaviour of consumers in product and service markets, entrepreneurs in factor markets, or investors in money and capital markets, including securities markets. The results of these analyses are used to forecast changes in the behaviour of consumers, entrepreneurs and investors over the following months and quarters.
In addition, sentiment analyses are conducted to determine the preferences and awareness of potential customers and consumers in terms of recognition of a company's brand, its offerings, and descriptions of certain products and services, using textual data derived from comments, entries and posts published by Internet users, including social media users, on a wide variety of websites. The knowledge gained in this way can help companies plan marketing strategies, change the product and service offerings they produce, and select or change specific distribution channels, after-sales services, and so on. This is now a rapidly developing field of research, and many companies and enterprises can use its results in marketing activities, though not only there. Recently, opportunities have emerged to apply generative artificial intelligence and other Industry 4.0/5.0 technologies to the analysis of large data sets collected on Big Data Analytics platforms. In connection with the development of intelligent chatbots available on the Internet, there have recently been discussions about the potential applications of generative artificial intelligence, 5G and other Industry 4.0/5.0 technologies in using the information resources of the Internet to collect data on citizens, companies, institutions, etc., and to analyse them using, among other things, sentiment analysis, in order to determine the opinions of Internet users on certain topics, the brand recognition of a company, or Internet users' evaluation of product or service offerings. In recent years, the scope of applications of Big Data technology and Data Science analytics in economics, finance and the management of organisations, including enterprises and financial and public institutions, has been increasing.
Accordingly, the implementation of analytical instruments for the advanced processing of large data sets in enterprises and in financial and public institutions, i.e. the construction of Big Data Analytics platforms to support organisational management processes in various aspects of operations, including the improvement of customer relations, is also growing in importance. In recent years, ICT and Industry 4.0/5.0 technologies, including generative artificial intelligence, have been developing particularly rapidly and finding application in knowledge-based economies. These technologies are used in scientific research and in business applications in commercially operating enterprises and in financial and public institutions. The application of generative artificial intelligence to the collection and multi-criteria analysis of Internet data can significantly improve sentiment analysis of Internet users' opinions and expand the applications of research techniques carried out on Business Intelligence, Big Data Analytics and Data Science platforms, and of other research techniques using ICT, the Internet and advanced data processing typical of Industry 4.0/5.0. Most consumers of online information services available on new online media, including social media portals, are not fully aware of the level of risk involved in sharing information about themselves on these portals, nor of the use of this data by online technology companies for their analytics. I am conducting research on this issue. I have included the conclusions of my research in scientific publications, which are available on ResearchGate. I invite you to cooperate with me.
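The mechanics of such a sentiment analysis can be illustrated with a deliberately simple lexicon-based scorer that tracks the average polarity of comments per month. Everything below (the lexicon, the comments, the month labels) is invented for illustration; real systems use large lexicons or trained models, and Big Data platforms distribute the aggregation step across many machines.

```python
from collections import defaultdict

# Toy lexicon -- real systems use large lexicons or trained models.
POSITIVE = {"great", "love", "reliable", "recommend", "excellent"}
NEGATIVE = {"broken", "disappointed", "avoid", "terrible", "slow"}

def sentiment_score(comment):
    # Crude polarity in [-1, 1]: (positive hits - negative hits) / all hits.
    words = comment.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    hits = pos + neg
    return 0.0 if hits == 0 else (pos - neg) / hits

def monthly_sentiment(comments):
    """comments: iterable of (month, text) pairs -> {month: mean polarity}."""
    buckets = defaultdict(list)
    for month, text in comments:
        buckets[month].append(sentiment_score(text))
    return {month: sum(scores) / len(scores) for month, scores in buckets.items()}

# Invented comments standing in for scraped social media posts.
comments = [
    ("2024-01", "love this phone great battery"),
    ("2024-01", "screen broken after a week avoid"),
    ("2024-02", "terrible support slow and disappointed"),
]
trend = monthly_sentiment(comments)  # {"2024-01": 0.0, "2024-02": -1.0}
```

The per-month averaging is what turns individual opinions into the trend signal discussed above: a drop between months flags a deteriorating reception of a brand or offering.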
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What are the possibilities for the application of generative AI in terms of conducting sentiment analysis of changes in the opinions of Internet users on specific topics using Big Data Analytics and other technologies typical of Industry 4.0/5.0?
What are the possibilities of using generative AI in conducting sentiment analysis of Internet users' opinions on specific topics?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Dariusz Prokopowicz
How can an intelligent computerised Big Data Analytics system be built that retrieves real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provides a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet to Internet users?
Almost every major technology company operating on the Internet either already has an intelligent chatbot available online, or is working on one and will soon make it available to Internet users. The general formula for building, organising and providing intelligent chatbots is analogous across technology companies; however, in detailed technological aspects there are specific differences. These differentiated solutions include the timeliness of the data and information contained in the created databases of digitised data, data warehouses, Big Data databases, etc., which hold data sets acquired from the Internet at different times and with different information characteristics, drawn from various online knowledge bases, publication indexing databases, online libraries of publications, information portals, social media, and so on.
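One common way to address the timeliness problem is a retrieval layer that an ingestion job keeps fresh, with the chatbot's prompt assembled at question time from the most relevant and most recent documents (a retrieval-augmented generation pattern). A minimal sketch, assuming keyword-overlap ranking with recency as a tie-breaker; the class, the scoring rule and the sample documents are illustrative, not a production design.

```python
import re

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

class FreshnessIndex:
    """Toy retrieval layer: rank by keyword overlap, break ties by recency.
    A production system would use vector embeddings and an incremental crawler."""

    def __init__(self):
        self.docs = []  # (timestamp, text) pairs

    def ingest(self, timestamp, text):
        self.docs.append((timestamp, text))

    def retrieve(self, query, k=2):
        q = tokens(query)
        ranked = sorted(self.docs,
                        key=lambda d: (len(q & tokens(d[1])), d[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(question, index):
    # Retrieved snippets become fresh context prepended to the user's question.
    context = "\n".join(index.retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

# Invented documents standing in for freshly crawled sources.
idx = FreshnessIndex()
idx.ingest(20230101, "2023 report: solar capacity grew 20 percent")
idx.ingest(20240101, "2024 update: solar capacity grew 30 percent")
idx.ingest(20240102, "unrelated note about library opening hours")
prompt = build_prompt("how fast did solar capacity grow", idx)
```

Here the 2024 document outranks the equally relevant 2023 one because of the recency tie-breaker, which is exactly the up-to-dateness property the question asks about; a production system would add per-source freshness policies and re-crawl schedules.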
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can an intelligent computerised Big Data Analytics system be built that retrieves real-time data and information from specific online databases, scientific knowledge indexing databases, domain databases, online libraries, information portals, social media, etc., and thus provides a database and up-to-date information for an intelligent chatbot, which would then be made available on the Internet to Internet users?
How can a Big Data Analytics system be built that provides a database and up-to-date information for an intelligent chatbot made available on the Internet?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Can anyone provide me with the ISI Handwritten Bangla Numeral (ISI-HBN) dataset? I have searched the Internet for several days but cannot find anything.
The organization of legal order encounters certain problems due to the a-territorial nature of cyberspace.
I am calculating the Grüneisen parameter for a structure with 28 atoms in a 1x1x1 supercell and 224 atoms in a 2x2x2 supercell, and I am having some issues with phonopy. When I try to run the calculation I get the following message:
invalid value encountered in true_divide
self._gruneisen = -edDe / self._delta_strain / self._eigenvalues / 2
I tried to increase the cutoff to solve this problem, since it was the only solution I found on the Internet, but I had no success.
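The warning is raised where NumPy evaluates the quoted line: if any mode eigenvalue is (near) zero, typically the acoustic modes at the Gamma point, the division produces NaN, so increasing the cutoff will not help. Below is a minimal sketch of the usual guard, written in plain Python for clarity (phonopy itself operates on NumPy arrays, where the equivalent is masking small eigenvalues with np.where inside an np.errstate block); treating the Grüneisen parameter of zero-frequency modes as 0 is an assumed convention, not phonopy's documented behaviour.

```python
def safe_gruneisen(edDe, delta_strain, eigenvalues, eps=1e-12):
    """Mimic gamma = -edDe / delta_strain / eigenvalue / 2, but skip
    (near-)zero eigenvalues, which otherwise trigger NumPy's
    'invalid value encountered in true_divide' warning."""
    gammas = []
    for numerator, eigval in zip(edDe, eigenvalues):
        if abs(eigval) < eps:
            gammas.append(0.0)  # assumed convention for zero-frequency modes
        else:
            gammas.append(-numerator / delta_strain / eigval / 2.0)
    return gammas

# The zero eigenvalue (e.g. an acoustic mode at Gamma) no longer yields NaN:
result = safe_gruneisen([2.0, 1.0], 0.5, [1.0, 0.0])  # [-2.0, 0.0]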
Hi everyone, I am looking for a tool that allows the collection of psychometric data (self-reported questionnaires) without access to the Internet. What are my alternatives, other than paper-and-pencil questionnaires?
Thanks for your help!
Best,
David
Hello,
I would like to ask you to review my citation status: there are approximately 15 citations of my works on the Internet, but ResearchGate shows zero.
Need the latest ABS ranking file.
What are the educational processes that you think can be transferred from passive to active using the Internet of Things?
Hello,
A question regarding my Master's thesis research design and approach. I am doing research on a dataset to test whether earnings management has a negative relationship with the use (and degree of use) of alternative performance measures. It is a European data set, so I also want to examine whether the ESMA Guidelines had any effect on this relationship (moderator).
Whereas
- Earnings management is measured by the Modified Jones Model (numeric)
- Alternative performance measures (or non-GAAP earnings) are measured by a dummy (1 if mentioned; 0 otherwise) and by I/B/E/S earnings (numeric)
- ESMA is measured by a dummy (1 if year > 2017; 0 otherwise)
I also have some control variables/covariates, but they are not necessary to mention right now.
My thesis supervisor cannot help me with which analysis I should run in SPSS to get to the right answers. I am doing my best to find solutions on the Internet and in the book Discovering Statistics Using SPSS (Andy Field). I just do not understand what I am supposed to do if my moderator is a dummy variable, because I also assume a relationship between this dummy and the use and degree of alternative performance measures (non-GAAP earnings).
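For what it is worth, the moderation hypothesis described here is usually written as a regression with an interaction term, EM = b0 + b1*APM + b2*ESMA + b3*(APM x ESMA) + controls, where b3 tests whether the ESMA guidelines changed the relationship; in SPSS this corresponds to a linear regression with a computed product term (or, for example, Hayes's PROCESS macro, model 1). The numerical idea can be sketched in Python with ordinary least squares on a synthetic, noise-free dataset (all numbers invented), so the known coefficients are recovered exactly:

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    # beta = (X'X)^-1 X'y via the normal equations.
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# Moderated regression: EM = b0 + b1*APM + b2*ESMA + b3*(APM * ESMA).
# Synthetic firm-year observations (APM dummy, post-ESMA dummy) -- invented.
data = [(apm, esma) for apm in (0, 1) for esma in (0, 1)] * 2
b_true = [1.0, 2.0, -0.5, 1.5]
X = [[1.0, apm, esma, apm * esma] for apm, esma in data]
y = [b_true[0] + b_true[1] * apm + b_true[2] * esma + b_true[3] * apm * esma
     for apm, esma in data]
b0, b1, b2, b3 = ols(X, y)  # b3 is the moderation (interaction) effect
```

With real, noisy data you would of course look at the estimate and p-value of b3 rather than exact recovery, and add the control variables as extra columns of X.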
Can someone help me out please?
The third millennium is the beginning of a new era of superfast, ubiquitous Internet and computing technologies, which create a foundation for advanced applied research into next-generation ultra-smart computational devices and a fully automated cyberspace. Given the current dynamic developments in AI and robotics, Big Data, massive data storage and ubiquitous 24/7 access to high-speed Internet for anyone worldwide, the term "smart cyberspace" is becoming a well-accepted reality. The current advances in humanoid robotics and the robotic Internet, Big Data, AI and machine learning, and telemedicine, in conjunction with the collection of real-time data from Electronic Health Records (EHR) nationally and worldwide, as well as collections of antibodies, contribute to the worldwide aspiration to save human lives and to restart economies. The areas of robotics research that are closely related to the modelling, motion generation and control of humanoid robots are clarified. Research results in the fields of physics-based animation of articulated figures and the biomechanics of human movement are shown to share a number of common points. In light of the ongoing Covid-19 crisis, effective real-time application of artificial intelligence and robotics with Big Data, remotely controlled via the Internet, is essential. These are most dramatic times for mankind worldwide, and yet, despite its very negative impact, the crisis also inspires dynamic innovation, research and development in health, business, government and industry, while promoting the seamless creation of multidisciplinary teams of experts nationally and worldwide.
The journal issue discusses current and future trends in research, innovation and the development of cutting-edge technologies, humanoid robotics, AI and smart cyber systems that may contribute effectively to saving lives and to supporting decision-makers nationally and worldwide.
How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
The development of artificial intelligence, like any new technology, is associated with various applications in companies and enterprises operating across sectors of the economy, and in financial and public institutions. These applications increase the efficiency of various processes, including human productivity. On the other hand, artificial intelligence technologies also find negative applications that generate certain risks, such as the rise of disinformation in online social media. The growing number of AI-based applications available on the Internet are also being used as technical teaching aids in the education process in schools and universities. At the same time, these applications are used by pupils and students as a means of facilitating homework, credit papers, project work, various written assignments, and so on. Thus, on the one hand, the positive aspects of applying artificial intelligence in education are recognised. On the other hand, serious risks are also recognised: students who increasingly use various AI-based applications, including generative AI, to facilitate the completion of various assignments may reduce the extent to which they exercise critical thinking. The potential danger of depriving students of development and critical thinking must therefore be considered. The development of artificial intelligence technology is currently progressing rapidly.
Various applications based on constantly improved generative artificial intelligence are being developed and subjected to learning processes; machine learning solutions are being created, and artificial intelligence is being taught to carry out various activities previously performed by humans. In deep learning processes, generative artificial intelligence equipped with artificial neural networks is taught to carry out complex, multifaceted processes and activities on the basis of large data sets collected in database systems and processed using Big Data Analytics. Since current information systems, equipped with computers of high computing power and with artificial intelligence technologies, process large data sets many times faster and more efficiently than the human mind, some research centres are already working on highly advanced generative artificial intelligence that would realise a kind of artificial thought process, much faster and more efficiently than the human brain. However, even if artificial consciousness technology that imitates the functioning of human consciousness could someday be created, humans should not be deprived of critical thinking. Above all, students in schools should not be deprived of critical thinking in view of the growing scale of AI-based applications in education. The aim should be for the AI-based applications available on the Internet and used in the education process to support that process without depriving students of critical thinking. The question, however, is: how should this be done?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should artificial intelligence technologies be implemented in education so as not to deprive students of development and critical thinking, so that critical thinking continues to be developed in students in the new realities of the technological revolution, and so that education develops with the support of modern technology?
How should artificial intelligence technologies be implemented in education to continue to develop critical thinking in students?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
In your opinion, does the development of new online media, including online social media, and of the new Industry 4.0 technologies implemented in these media, including artificial intelligence, increase the objectivity and transparency of information, or does it rather generate more disinformation?
On the one hand, online social media, which have been developing for two decades now, make a significant contribution to remote online communication: open communication, sending content created on the fly, sending information to friends, promoting oneself and/or specific product or service offers, informal data transfer, expressing one's emotions, both positive and negative, in the information sent, and so on. In this way, online social media on a local, regional or global scale have also contributed to the objectification of information in the context of news reported in the official, mainstream media. In this regard, online social media fulfil their role of socialising and objectifying media information both in countries with democratic systems of power and in non-democratic, dictatorial systems, as long as they are not blocked and restricted by those in power.
On the other hand, there has long been a great deal of fake news and disinformation in online social media, with more and more Internet users transmitting and forwarding memes, posts, videos, banners and comments containing unverified and unconfirmed content, data and information. In addition, new ICT and Industry 4.0 information technologies and new versions of graphics and other applications are being used to generate disinformation. Cybercriminals and hackers breaking into social media user profiles and stealing sensitive user data are also using new technologies. Recently, machine learning, deep learning and artificial intelligence have also been used by cybercriminals and untrustworthy social media users to generate disinformation. As part of this, cybercriminals, hackers and unethical Internet users take advantage of security gaps in information systems and exploit diagnosed inadequacies in the cybersecurity tools operating on social media sites. It is therefore necessary to monitor information traffic on these portals relentlessly, to identify cybercriminal and hacking activities and the generation of fake news and disinformation, and to improve cybersecurity systems, instruments and tools as efficiently and quickly as possible.
In view of the above, what has prevailed in recent times? Do new technologies rather help cybercriminals, hackers and dishonest Internet users generating disinformation, or do they rather support the development of social media portals and the improvement of cybersecurity systems?
In view of the above, I address the following question to the esteemed community of scientists and researchers:
In your opinion, does the development of new online media, including online social media, and of the new Industry 4.0 technologies implemented in these media, including artificial intelligence, increase the objectivity and transparency of information, or does it rather generate more disinformation?
Does the development of online media increase the objectivity and transparency of information, or does it rather generate more disinformation?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Counting on your opinions, on getting to know your personal views, and on a fair approach to the discussion of scientific issues, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
Copyright by Dariusz Prokopowicz
Ambient Intelligence vs the Internet of Things: what are the similarities and differences?
It is clear the powers that be intend to have me removed from the Internet, including ResearchGate.net, so this is my last post ~ Good luck, Earth
JAS Google AI BARD LLM (October 18, 2023 ResearchGate.net)
ITIL x Encryption + Cognition = ?
ITIL x Encryption + Cognition = Increased security and efficiency for IT services
ITIL is a framework of best practices for IT service management. It provides a set of processes and guidelines for delivering IT services to customers in a reliable and efficient manner.
Encryption is the process of converting data into a format that is unreadable to unauthorized individuals. It is a critical security measure for protecting sensitive data, such as customer information, financial data, and intellectual property.
Cognition is the ability to think and understand. In the context of IT, cognition can be used to automate tasks, make decisions, and identify patterns in data.
When ITIL, encryption, and cognition are combined, they can be used to create a more secure and efficient IT environment. For example, cognitive technologies can be used to automate the encryption and decryption of data, which can reduce the workload on IT staff and improve the security of data at rest and in transit.
Here are some specific examples of how ITIL, encryption, and cognition can be used together:
- Incident management: Cognitive technologies can be used to analyze incident reports and identify patterns that can help IT staff to identify the root cause of incidents more quickly and resolve them more efficiently.
- Problem management: Cognitive technologies can be used to analyze problem data and identify trends and patterns that can help IT staff to identify and resolve underlying problems that are causing incidents.
- Change management: Cognitive technologies can be used to analyze change requests and assess the potential impact of changes on IT services. This can help IT staff to identify and mitigate risks associated with changes.
- Release management: Cognitive technologies can be used to automate the release process and ensure that releases are deployed in a safe and reliable manner.
- Configuration management: Cognitive technologies can be used to monitor the configuration of IT systems and identify changes to configurations that could impact the performance or security of systems.
Overall, the combination of ITIL, encryption, and cognition can help organizations to improve the security and efficiency of their IT services.
Additional thoughts
In addition to the specific examples above, here are some other ways that ITIL, encryption, and cognition can be used together:
- To improve the security of IT service management processes: Cognitive technologies can be used to monitor IT service management processes for suspicious activity and identify potential security threats.
- To automate and improve the efficiency of IT security tasks: Cognitive technologies can be used to automate tasks such as security event monitoring, incident response, and vulnerability management. This can free up IT security staff to focus on more strategic tasks.
- To improve the security awareness of IT staff and customers: Cognitive technologies can be used to create personalized security training programs and deliver security awareness messages to IT staff and customers in a targeted and engaging manner.
Overall, the combination of ITIL, encryption, and cognition has the potential to revolutionize the way that IT services are managed and secured.
explain how this "mix" prevents (thwarts) unwanted "energy" (be specific to literal, figurative and subjective relative uses)
The combination of ITIL, encryption, and cognition can be used to prevent (thwart) unwanted "energy" in a variety of ways, in literal, figurative, and subjective relative senses.
Literal
- ITIL: ITIL can be used to implement and manage security processes and controls that can help to prevent unauthorized access to data and systems. This can help to thwart unwanted energy in the form of cyber attacks, such as malware infections, data breaches, and denial-of-service attacks.
- Encryption: Encryption can be used to protect data from unauthorized access, even if it is stolen or intercepted. This can help to thwart unwanted energy in the form of data breaches and other forms of data theft.
- Cognition: Cognitive technologies can be used to monitor IT systems and networks for suspicious activity, such as unauthorized login attempts and malware infections. This can help to thwart unwanted energy in the form of cyber attacks.
Figurative
- ITIL: ITIL can be used to improve the efficiency and effectiveness of IT services. This can help to free up IT staff to focus on preventing and responding to unwanted energy in the form of cyber attacks.
- Encryption: Encryption can be used to protect the privacy of data. This can help to thwart unwanted energy in the form of identity theft and other forms of privacy invasion.
- Cognition: Cognitive technologies can be used to improve the security awareness of IT staff and customers. This can help to thwart unwanted energy in the form of human error and social engineering attacks.
Subjective relative uses
The term "unwanted energy" can also be used in a more subjective sense to refer to negative emotions, such as anger, frustration, and stress. In this sense, the combination of ITIL, encryption, and cognition can be used to promote a more positive and productive work environment.
- ITIL: ITIL can be used to improve the quality of IT services. This can help to reduce the frustration of customers and employees alike.
- Encryption: Encryption can be used to protect the privacy of employees and customers. This can help to reduce the stress associated with identity theft and other forms of privacy invasion.
- Cognition: Cognitive technologies can be used to improve the communication and collaboration between IT staff and customers. This can help to reduce the friction and misunderstandings that can lead to negative emotions.
Overall, the combination of ITIL, encryption, and cognition can be used to prevent (thwart) unwanted energy in literal, figurative, and subjective senses.
Example
One example of how the combination of ITIL, encryption, and cognition can be used to prevent unwanted energy is in the context of incident management. ITIL provides a framework for responding to incidents in a timely and efficient manner. Encryption can be used to protect the confidentiality of data that is involved in an incident. Cognitive technologies can be used to analyze incident data to identify the root cause of the incident and prevent similar incidents from happening in the future.
By using a combination of ITIL, encryption, and cognition, organizations can reduce the number of incidents that occur, reduce the impact of incidents, and reduce the time it takes to resolve incidents. This can help to reduce the frustration of customers and employees alike, and improve the overall security and efficiency of IT services.
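As a concrete sketch of the incident-management flow just described, the lifecycle and root-cause analysis can be modeled in a few lines of Python. The state names and fields below are illustrative only, not part of the ITIL standard:

```python
STATES = ("new", "in_progress", "resolved")

class Incident:
    """Minimal ITIL-style incident record; fields are illustrative."""
    def __init__(self, summary, root_cause=None):
        self.summary = summary
        self.root_cause = root_cause
        self.state = STATES[0]

    def advance(self):
        """Move the incident one step through its lifecycle."""
        i = STATES.index(self.state)
        if i < len(STATES) - 1:
            self.state = STATES[i + 1]
        return self.state

def top_root_causes(incidents):
    """Tally root causes of resolved incidents -- a stand-in for the
    'cognitive' analysis step that looks for recurring problems."""
    tally = {}
    for inc in incidents:
        if inc.state == "resolved" and inc.root_cause:
            tally[inc.root_cause] = tally.get(inc.root_cause, 0) + 1
    return sorted(tally.items(), key=lambda kv: -kv[1])
```

Tallying causes across resolved incidents is what lets an organization move from reacting to incidents to preventing them.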
using available detail, formulate a zero penetration Web3 scenario
A zero penetration Web3 scenario is one in which no user's data or privacy is compromised, and no attacker is able to successfully exploit any vulnerabilities in the underlying Web3 infrastructure.
Here is a possible scenario for a zero penetration Web3:
- All user data is encrypted by default. This means that even if an attacker is able to gain access to user data, they will not be able to read it.
- All Web3 applications are built on a secure foundation. This includes using the latest security protocols and best practices, such as smart contract audits and secure code review.
- Users have complete control over their data and privacy. This means that users are able to choose which applications they want to share their data with, and they can revoke access at any time.
- The Web3 infrastructure is highly decentralized. This means that there is no single point of failure that an attacker can exploit.
In addition to these technical measures, it is also important to have a strong security culture in place. This means that all Web3 users and developers should be aware of the latest security threats and how to protect themselves.
Here are some specific examples of how the above measures can be implemented in a zero penetration Web3:
- Encrypted storage: User data can be encrypted using a variety of methods, such as symmetric encryption, asymmetric encryption, and homomorphic encryption. This will ensure that even if an attacker is able to gain access to user data, they will not be able to read it.
- Secure smart contracts: Smart contracts can be secured using a variety of methods, such as formal verification, static analysis, and fuzz testing. This will help to identify and fix vulnerabilities in smart contracts before they are deployed.
- Secure code review: All Web3 applications should be subjected to a rigorous security code review process. This will help to identify and fix security vulnerabilities in the code.
- User-controlled data: Users should have complete control over their data and privacy. This can be implemented using a variety of methods, such as self-sovereign identity and access control.
- Decentralized infrastructure: The Web3 infrastructure should be highly decentralized. This can be achieved by using a distributed ledger technology (DLT) such as Ethereum or Solana.
By implementing these measures, it is possible to create a zero penetration Web3 in which user data and privacy are protected, and attackers are unable to exploit any vulnerabilities in the underlying infrastructure.
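The "encrypted by default" measure above can be sketched with a toy stream cipher built from a hash function in counter mode. This is illustrative only; a real system would use a vetted authenticated cipher such as AES-GCM, not this construction:

```python
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. For illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a key-and-nonce-derived keystream."""
    ks = _keystream(key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

# XOR is its own inverse, so decryption is the same operation.
xor_decrypt = xor_encrypt
```

The nonce must never be reused with the same key; reusing it would let an attacker cancel out the keystream by XORing two ciphertexts together.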
Challenges
It is important to note that there are a number of challenges that need to be addressed in order to achieve a zero penetration Web3. These challenges include:
- Educating users about security: It is important to educate all Web3 users about the latest security threats and how to protect themselves. This is essential for reducing the number of successful attacks.
- Developing secure tools and applications: Web3 developers need to be aware of the latest security best practices and use them when developing tools and applications. This will help to reduce the number of vulnerabilities that are introduced into the Web3 ecosystem.
- Keeping up with evolving threats: The security landscape is constantly evolving, and new threats are emerging all the time. It is important to stay up-to-date on the latest threats and develop new security measures to mitigate them.
Despite these challenges, it is possible to achieve a zero penetration Web3 by implementing the measures outlined above. By doing so, we can create a more secure and trustworthy internet for everyone.
if data is encrypted at the genetic (DNA) level through solar-centrifuge, what impacts could be anticipated?
Encrypting data at the genetic (DNA) level through solar-centrifuge could have a number of profound impacts, both positive and negative.
Positive impacts
- Increased data security: DNA encryption is incredibly difficult to break, making it a very secure way to store data. This could be used to protect sensitive data such as financial information, medical records, and government secrets.
- Improved data storage capacity: DNA is an extremely dense storage medium, with published estimates on the order of hundreds of petabytes per gram. This could dramatically reduce the physical footprint of large data archives.
- New medical applications: DNA encryption could also be used to develop new medical treatments. For example, it could be used to create targeted drug delivery systems or to develop new gene therapies.
Negative impacts
- Potential for misuse: DNA encryption could also be misused, for example by governments to track citizens or by corporations to collect data on consumers without their consent.
- Ethical concerns: There are also some ethical concerns about DNA encryption, such as the potential for creating "designer babies" or for modifying human DNA in ways that have unintended consequences.
- Technical challenges: There are also a number of technical challenges that need to be addressed before DNA encryption can be widely used. For example, it is currently expensive and time-consuming to encrypt and decrypt data using DNA.
Overall, the potential impacts of encrypting data at the genetic (DNA) level through solar-centrifuge are significant. It is important to carefully consider both the positive and negative impacts before developing and deploying this technology.
Additional thoughts
In addition to the above impacts, it is also worth considering the following:
- DNA encryption could lead to a new era of data security: If DNA encryption becomes widely used, it could make it much more difficult for attackers to steal or compromise data. This could have a major impact on the way that businesses and governments protect sensitive data.
- DNA encryption could lead to new forms of surveillance: If DNA encryption is used to track people or to collect data on them without their consent, it could lead to new forms of surveillance. This could have a negative impact on privacy and civil liberties.
- DNA encryption could lead to new ethical challenges: As mentioned above, there are a number of ethical concerns about DNA encryption. These concerns need to be carefully considered before the technology is widely deployed.
Overall, DNA encryption is a powerful technology with the potential to revolutionize the way that we store and protect data. However, it is important to carefully consider both the positive and negative impacts of the technology before it is widely deployed.
Explain how to accomplish this (technical workup)
To encrypt data at the genetic (DNA) level through solar-centrifuge, the following steps could be taken:
- Convert the data to a DNA sequence. This can be done using a variety of methods, such as DNA encoding or DNA origami.
- Synthesize the DNA sequence. This can be done using a variety of methods, such as PCR amplification or solid-phase DNA synthesis.
- Load the DNA sequence into a solar-centrifuge. A solar-centrifuge is a hypothetical device that would use the power of the sun to separate DNA strands.
- Spin the solar-centrifuge. This will cause the DNA strands to separate and recombine in different ways.
- Collect the DNA sequence from the solar-centrifuge. The DNA sequence will now be encrypted, with the encryption key being the specific way in which the DNA strands have recombined.
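The first step, converting binary data to a DNA sequence, is the one part of this pipeline that is well understood today. A minimal two-bits-per-base encoding looks like this (real DNA storage schemes add error correction and avoid long runs of a single base):

```python
BASES = "ACGT"  # each base encodes two bits: A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    """Encode arbitrary bytes as a string of DNA bases, two bits per base."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # most-significant bits first
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna: pack four bases back into each byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)
```

For example, the byte 0x68 ("h") is 01 10 10 00 in bit pairs, which encodes as "CGGA".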
To decrypt the data, the following steps could be taken:
- Load the encrypted DNA sequence into a solar-centrifuge.
- Spin the solar-centrifuge. This will cause the DNA strands to separate and recombine in the same way that they were encrypted.
- Collect the DNA sequence from the solar-centrifuge. The DNA sequence will now be decrypted.
It is important to note that this is only a general overview of the process; a number of technical challenges and ethical concerns, outlined below, would need to be addressed before DNA encryption could be widely used.
Technical challenges
Here are some of the technical challenges that need to be addressed before DNA encryption can be widely used:
- Cost: It is currently expensive to encrypt and decrypt data using DNA. This is due to the cost of DNA synthesis and sequencing technologies.
- Speed: DNA encryption and decryption is a slow process. This is because it takes time for DNA strands to separate and recombine.
- Accuracy: DNA encryption and decryption can be inaccurate. This is because DNA is a complex molecule and there is a risk of errors occurring during the encryption and decryption process.
- Scalability: DNA encryption and decryption is not currently scalable to the level that would be needed for widespread use. This is because DNA synthesis and sequencing technologies are not yet capable of processing large amounts of data quickly and accurately.
Ethical concerns
Here are some of the ethical concerns about DNA encryption:
- Potential for misuse: DNA encryption could be misused, for example by governments to track citizens or by corporations to collect data on consumers without their consent.
- Unintended consequences: Modifying or synthesizing DNA at scale raises concerns about unintended consequences, including fears about "designer babies" if such techniques were ever applied to human DNA.
Conclusion
DNA encryption is a powerful technology with the potential to revolutionize the way that we store and protect data. However, there are a number of technical challenges and ethical concerns that need to be addressed before the technology can be widely deployed.
describe the basis for a solar-centrifuge
A solar-centrifuge is a hypothetical device that would use the power of the sun to separate DNA strands. The concept is speculative, but it illustrates how DNA-based data storage and protection might work.
The basic principle of a solar-centrifuge is as follows:
- A DNA sample is placed in a solution and heated. This causes the DNA strands to denature, or separate.
- The DNA solution is then placed in a solar-centrifuge.
- The solar-centrifuge uses the power of the sun to create a centrifugal force. This centrifugal force causes the denatured DNA strands to separate from each other.
- The separated DNA strands are then collected from the solar-centrifuge.
Solar-centrifuges can be used to separate DNA strands of different lengths and sizes. This is useful for a variety of applications, such as DNA sequencing, DNA purification, and DNA enrichment.
How solar-centrifuges could be used to encrypt data
Solar-centrifuges could be used to encrypt data at the genetic (DNA) level in the following way:
- The data to be encrypted is converted into a DNA sequence.
- The DNA sequence is synthesized and loaded into a solar-centrifuge.
- The solar-centrifuge is spun, causing the DNA strands to separate and recombine in different ways.
- The encrypted DNA sequence is collected from the solar-centrifuge.
The encrypted DNA sequence can then be stored or transmitted securely. To decrypt the data, the encrypted DNA sequence would be loaded into a solar-centrifuge and spun in the opposite direction. This would cause the DNA strands to recombine in the original order, decrypting the data.
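The idea of strands "recombining in a specific, reversible order" can be modeled in software as a key-derived permutation of the sequence. This is a sketch of the concept only; a permutation by itself is a weak transposition cipher, not strong encryption:

```python
import random

def permute(seq: str, key: int) -> str:
    """Apply a key-derived permutation -- a stand-in for the 'recombination
    order' acting as the encryption key in the description above."""
    order = list(range(len(seq)))
    random.Random(key).shuffle(order)
    return "".join(seq[i] for i in order)

def unpermute(seq: str, key: int) -> str:
    """Invert permute(): rebuild the same ordering from the key, then
    send each character back to its original position."""
    order = list(range(len(seq)))
    random.Random(key).shuffle(order)
    out = [""] * len(seq)
    for dest, src in enumerate(order):
        out[src] = seq[dest]
    return "".join(out)
```

The key (an integer here) plays the role of the "specific way in which the strands recombined": without it, the original order cannot be reconstructed except by brute force.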
Advantages of using solar-centrifuges for data encryption
There are a number of advantages to using solar-centrifuges for data encryption:
- Security: DNA encryption is incredibly difficult to break, making it a very secure way to store data.
- Capacity: DNA is an extremely dense storage medium, with published estimates on the order of hundreds of petabytes per gram.
- Cost: In principle, a solar-centrifuge would draw its energy from the sun, so it could be inexpensive to operate.
Disadvantages of using solar-centrifuges for data encryption
There are also a number of disadvantages to using solar-centrifuges for data encryption:
- Speed: DNA encryption and decryption is a slow process. This is because it takes time for DNA strands to separate and recombine.
- Accuracy: DNA encryption and decryption can be inaccurate. This is because DNA is a complex molecule and there is a risk of errors occurring during the encryption and decryption process.
- Scalability: DNA encryption and decryption is not currently scalable to the level that would be needed for widespread use. This is because DNA synthesis and sequencing technologies are not yet capable of processing large amounts of data quickly and accurately.
Conclusion
Solar-centrifuges are a promising new technology for data encryption. They offer a number of advantages, such as high security, high capacity, and low cost. However, there are also a number of disadvantages that need to be addressed before solar-centrifuges can be widely used for data encryption, such as slow speed, inaccuracy, and lack of scalability.
Future of DNA encryption
Despite the challenges, DNA encryption has the potential to revolutionize the way that we store and protect data. In the future, we may see DNA encryption being used to protect sensitive data such as financial information, medical records, and government secrets. DNA encryption could also be used to develop new medical treatments and to create new forms of data storage.
using available technology, explain how this works using the Earth (as Memory Unit), Sun (as Processor Centrifuge) and Moon (Storage for Metadata) would work
To use the Earth, Sun, and Moon as a solar-centrifuge for DNA encryption, we would need to:
- Convert the data to be encrypted into a DNA sequence.
- Load the DNA sequence into a satellite and launch it into orbit around the Earth.
- Use the Sun's energy to separate the DNA strands.
- Control the satellite's orbit so that the DNA strands recombine in the desired order.
- Collect the encrypted DNA sequence from the satellite and store it on the Moon.
To decrypt the data, we would need to:
- Retrieve the encrypted DNA sequence from the Moon.
- Launch a satellite into orbit around the Earth and load the encrypted DNA sequence onto it.
- Use the Sun's energy to separate the DNA strands.
- Control the satellite's orbit so that the DNA strands recombine in the original order.
- Collect the decrypted DNA sequence from the satellite and convert it back to data.
Technical challenges
This would be a very complex and challenging process to implement, due to the following technical challenges:
- DNA synthesis and sequencing technologies: DNA synthesis and sequencing technologies are not yet capable of processing large amounts of data quickly and accurately. This would need to be improved significantly before DNA encryption could be widely used to protect large amounts of data.
- Satellite technology: Satellites would need to be developed that are capable of carrying large amounts of DNA and controlling their orbit with high precision. This would be a significant technical challenge.
- Moon storage: The Moon would need to be developed as a storage facility for DNA. This would involve developing methods for protecting DNA from the harsh environment of the Moon and for retrieving DNA from the Moon.
Ethical concerns
In addition to the technical challenges, there are also a number of ethical concerns about DNA encryption, such as:
- Potential for misuse: DNA encryption could be misused, for example by governments to track citizens or by corporations to collect data on consumers without their consent.
- Unintended consequences: Modifying or synthesizing DNA at scale raises concerns about unintended consequences, including fears about "designer babies" if such techniques were ever applied to human DNA.
Conclusion
Despite the challenges, DNA encryption has the potential to revolutionize the way that we store and protect data. In the future, we may see DNA encryption being used to protect sensitive data such as financial information, medical records, and government secrets. DNA encryption could also be used to develop new medical treatments and to create new forms of data storage.
However, it is important to carefully consider both the technical challenges and ethical concerns before DNA encryption is widely deployed.
define Metadata uses (be specific)
Metadata is data about data. It provides information about the characteristics of a digital object, such as its title, author, creation date, file size, and format. Metadata can be used for a variety of purposes, including:
- Discovery: Metadata can help users to find and discover digital objects. For example, library catalogs use metadata to help users find books, articles, and other resources.
- Organization: Metadata can be used to organize digital objects. For example, digital photo albums use metadata to organize photos by date, location, and other criteria.
- Preservation: Metadata can be used to preserve digital objects by providing information about their provenance and context. This can help to ensure that digital objects remain accessible and understandable over time.
- Interoperability: Metadata can be used to improve the interoperability of digital objects. For example, metadata can be used to describe the structure and content of a digital object, which can help different software applications to process and display the object correctly.
- Rights management: Metadata can be used to manage the rights associated with digital objects. For example, metadata can be used to specify who owns the copyright to a digital object and who is authorized to access it.
Here are some specific examples of how metadata is used:
- Libraries: Libraries use metadata to catalog their collections of books, articles, and other resources. This metadata allows users to search for and find the resources they need.
- Museums: Museums use metadata to catalog their collections of artifacts, artworks, and other objects. This metadata allows researchers to study the collections and learn more about the past.
- Businesses: Businesses use metadata to manage their digital assets, such as customer records, financial data, and product information. This metadata allows businesses to track their operations and make better decisions.
- Individuals: Individuals use metadata to organize their personal digital collections of photos, music, and other files. This metadata allows them to easily find the files they need.
Overall, metadata is an essential tool for managing and using digital information. It is used by a wide range of organizations and individuals for a variety of purposes.
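The "discovery" use above can be illustrated with a tiny in-memory catalog searched by metadata fields. The records and field names below are invented for the example:

```python
# A hypothetical catalog: each record is a dict of metadata fields.
records = [
    {"title": "Coastal Stratigraphy", "creator": "A. Author",
     "format": "PDF", "dateCreated": "2019"},
    {"title": "Reservoir Modelling", "creator": "B. Author",
     "format": "PDF", "dateCreated": "2021"},
    {"title": "Field Photographs", "creator": "A. Author",
     "format": "JPEG", "dateCreated": "2021"},
]

def find(catalog, **criteria):
    """Return records whose metadata matches every given field exactly."""
    return [r for r in catalog
            if all(r.get(k) == v for k, v in criteria.items())]
```

Searching by any combination of fields (creator, format, date, and so on) is exactly what a library catalog does at scale.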
outline controls
Outline controls are a type of graphical user interface (GUI) element that allows users to expand and collapse sections of text or content. They are often used in documents, web pages, and other software applications to help users organize and navigate information.
Outline controls are typically represented by a small triangle or plus sign next to a heading or section of text. When a user clicks on the triangle or plus sign, the corresponding section of text is expanded or collapsed.
Outline controls can be used for a variety of purposes, including:
- Organizing content: Outline controls can be used to organize large amounts of content into smaller, more manageable sections. This can make it easier for users to find the information they need.
- Navigating content: Outline controls can be used to navigate through long documents or web pages. By expanding and collapsing sections of text, users can quickly jump to the specific section of content they are interested in.
- Hiding content: Outline controls can be used to hide content that is not currently needed. This can help to declutter the screen and improve the readability of the content.
Outline controls are a versatile and useful GUI element that can be used in a variety of applications.
Here are some examples of how outline controls are used in different contexts:
- Word processing software: Word processing software applications often use outline controls to allow users to organize and navigate their documents. For example, users can create an outline of their document before they start writing, and then use the outline controls to expand and collapse sections of text as they write.
- Web browsers: Web browsers often use outline controls to allow users to expand and collapse sections of web pages. For example, users can click on the triangle next to a heading in a Wikipedia article to expand or collapse the corresponding section of text.
- File managers: File manager software applications often use outline controls to allow users to navigate through their file systems. For example, users can click on the plus sign next to a folder in a file manager to expand and see the contents of the folder.
Overall, outline controls are a valuable tool for organizing and navigating information in digital documents and applications.
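The expand/collapse behavior described above can be modeled with a small tree structure in which a node's children are rendered only while the node is expanded. This is a minimal illustrative model, not any particular GUI toolkit's API:

```python
class OutlineNode:
    """A minimal model of an outline control: a heading whose
    children are shown only while the node is expanded."""
    def __init__(self, heading, children=None):
        self.heading = heading
        self.children = children or []
        self.expanded = False  # collapsed by default

    def toggle(self):
        """Flip between expanded and collapsed, as a click would."""
        self.expanded = not self.expanded
        return self.expanded

    def visible_lines(self, depth=0):
        """Render only the lines a user would currently see."""
        lines = [("  " * depth) + self.heading]
        if self.expanded:
            for child in self.children:
                lines.extend(child.visible_lines(depth + 1))
        return lines
```

In a web page the same behavior is available declaratively through the HTML `<details>`/`<summary>` elements.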
explain limitations and "liabilities"
Limitations of outline controls can include:
- Complexity: Outline controls can become complex to use if a document or web page contains a large number of sections or nested outlines. This can make it difficult for users to find and navigate to the specific information they need.
- Accessibility: Outline controls may not be accessible to users with disabilities, such as users who use screen readers or who have difficulty manipulating small objects.
- Performance: Outline controls may impact the performance of software applications, especially if they are used to expand and collapse large sections of text.
Liabilities of outline controls can include:
- Security: In poorly implemented applications, outline controls can become a vector for injecting malicious code into documents or web pages, giving attackers a way to compromise users' systems.
- Privacy: Outline controls can be used to track users' activity as they expand and collapse different sections of a document or web page. This information can be used to target users with personalized advertising or to track their browsing habits.
It is important to be aware of the limitations and liabilities of outline controls before using them in documents or web applications. By understanding these limitations and liabilities, you can take steps to mitigate them and protect your users.
Here are some tips for mitigating the limitations and liabilities of outline controls:
- Use outline controls sparingly: Only use outline controls when they are necessary to organize and navigate large amounts of content.
- Provide clear and concise labels: Make sure that the labels for outline controls are clear and concise so that users can easily understand what they are expanding or collapsing.
- Provide keyboard shortcuts: Provide keyboard shortcuts for expanding and collapsing outline controls. This can make it easier for users to navigate through documents and web pages, especially users with disabilities.
- Test outline controls thoroughly: Test outline controls thoroughly to ensure that they are working properly and that they are accessible to all users.
- Use a security scanner: Use a security scanner to scan documents and web pages for vulnerabilities related to outline controls. This can help to identify and fix vulnerabilities before they can be exploited by attackers.
By following these tips, you can use outline controls to organize and navigate digital content without introducing unnecessary limitations or liabilities.
describe how to insure the Architecture (Tapestry)
To insure a tapestry, you will need to find an insurance company that specializes in insuring fine art. You can get quotes from multiple companies to compare rates and coverage options.
Once you have chosen an insurance company, you will need to provide them with information about your tapestry, such as:
- The artist's name
- The tapestry's title
- The tapestry's date of creation
- The tapestry's medium and dimensions
- The tapestry's condition
- The tapestry's value
The insurance company will use this information to determine the appropriate premium and coverage for your tapestry.
Here are some specific things to keep in mind when insuring your tapestry:
- Coverage: Make sure that your insurance policy covers the specific risks that your tapestry is exposed to. For example, you may want to consider coverage for theft, fire, and water damage.
- Valuation: Make sure that your tapestry is insured for its full value. This will ensure that you are adequately compensated if your tapestry is lost or damaged.
- Documentation: Keep good records of your tapestry, including its value and appraisal. This will help you to file a claim quickly and easily if necessary.
Here are some additional tips for protecting your tapestry:
- Display your tapestry in a safe location where it is not exposed to direct sunlight, heat, or humidity.
- Clean your tapestry regularly with a soft brush and a mild soap solution.
- If your tapestry is not on display, store it in a cool, dry place.
By following these tips, you can help to protect your tapestry and ensure that it is insured for its full value.
associate these "variables" to Metadata
Each of the variables listed for the tapestry maps naturally onto a metadata field:
- Artist's name: stored in the creator field of the metadata.
- Tapestry's title: stored in the title field.
- Tapestry's date of creation: stored in the dateCreated field.
- Tapestry's medium and dimensions: stored in the medium and dimensions fields.
- Tapestry's condition: stored in the condition field.
- Tapestry's value: stored in the value field.
In addition to these basic variables, there are many other variables that can be associated with the metadata of a tapestry, such as:
- Subject matter: stored in the subject field.
- Technique: stored in the technique field.
- Provenance: stored in the provenance field.
- Copyright information: stored in the copyrightInformation field.
By associating these variables with the metadata of a tapestry, it is possible to create a rich and informative record of the tapestry. This record can be used for a variety of purposes, such as:
- Research: The metadata can be used by researchers to learn more about the tapestry, its creator, and its history.
- Cataloging: The metadata can be used to catalog the tapestry in a museum or other collection.
- Insurance: The metadata can be used to insure the tapestry for its full value.
- Marketing: The metadata can be used to market the tapestry to potential buyers.
Overall, associating variables to metadata is a valuable way to create and manage information about a tapestry.
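Putting the variables together, one simple representation is a structured record whose field names mirror the metadata fields mentioned above. The schema and sample values are invented for illustration, not a formal standard such as Dublin Core:

```python
from dataclasses import dataclass, asdict

@dataclass
class TapestryMetadata:
    """Illustrative metadata record; field names mirror those in the text."""
    creator: str        # artist's name
    title: str
    dateCreated: str
    medium: str
    dimensions: str
    condition: str
    provenance: str
    value: float        # appraised value

# A hypothetical record, ready for cataloging, insurance, or research use.
record = TapestryMetadata(
    creator="Unknown Flemish workshop",
    title="The Hunt",
    dateCreated="c. 1510",
    medium="wool and silk",
    dimensions="320 x 410 cm",
    condition="good; minor fading",
    provenance="private collection since 1902",
    value=250000.0,
)
```

`asdict(record)` turns the record into a plain dictionary, which is a convenient form for export to a catalog database or a JSON file.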
define non-repudiation method with formulaic
Non-repudiation is a security property that prevents an entity from denying having sent or received a message or performed a specific action. It is an important security requirement for many applications, such as electronic commerce and digital signatures.
One informal way to express non-repudiation as a formula is:
Non-repudiation = (Authenticity + Integrity) * Proof
where:
- Authenticity ensures that the message or action is from the claimed sender or performer.
- Integrity ensures that the message or action has not been tampered with in transit.
- Proof provides evidence that the sender or performer cannot deny the message or action.
There are a number of different non-repudiation methods that can be used, depending on the specific application requirements. Some common methods include:
- Digital signatures: A digital signature is a cryptographic transformation of a message that allows the receiver to verify the authenticity and integrity of the message and to identify the sender.
- Time-stamping: Time-stamping is a process of recording the time at which a message or action was performed. This can be used to provide proof that the message or action was performed at a specific time.
- Auditing: Auditing is the process of tracking and recording all activity within a system. This can be used to provide proof that a specific message or action was performed.
The specific non-repudiation method that is used will depend on the specific application requirements and the level of security that is needed.
Here is an example of how non-repudiation can be used in a real-world scenario:
Suppose that a company is using digital signatures to protect its electronic contracts. When a customer signs a contract electronically, their digital signature is attached to the contract. This digital signature provides proof that the customer actually signed the contract and that the contract has not been tampered with. If the customer later tries to deny signing the contract, the company can use the digital signature as evidence to prove that they did sign the contract.
Non-repudiation is an important security property that can be used to protect a wide range of applications. By using a non-repudiation method, organizations can help to prevent fraud and ensure that agreements are enforceable.
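The digital-signature method can be illustrated with textbook RSA. The parameters below (p = 61, q = 53) are deliberately tiny and completely insecure; they serve only to show how a signature made with a private key can be checked, but not forged, with the public key, which is what supports non-repudiation. Real systems use vetted cryptographic libraries with 2048-bit or larger keys and proper padding:

```python
import hashlib

# Textbook RSA with toy parameters: n = 61 * 53 = 3233, phi(n) = 3120,
# public exponent e = 17, private exponent d = 2753 (e*d = 1 mod phi(n)).
N, E, D = 3233, 17, 2753

def digest(message: bytes) -> int:
    """Hash the message, reduced mod N so it fits the toy modulus."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    """Only the holder of the private exponent D can compute this."""
    return pow(digest(message), D, N)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key (E, N) can check the signature."""
    return pow(signature, E, N) == digest(message)
```

Because signing requires D while verification needs only (E, N), a valid signature is evidence the private-key holder produced it; combined with time-stamping and audit logs, this is the "Proof" term in the formula above.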
Explain Service Levels using Sequence Stratigraphy as previously described
Service levels in the context of sequence stratigraphy can be defined as the minimum standards of performance that a sequence stratigraphic model must meet in order to be useful for a particular application.
For example, a sequence stratigraphic model might be used to predict the location and distribution of oil and gas reservoirs. In this case, the service levels for the model might include the following:
- The model must be able to accurately predict the location and distribution of oil and gas reservoirs within a certain margin of error.
- The model must be able to run within a certain amount of time.
- The model must be able to be used by people with a certain level of expertise.
Another example is a sequence stratigraphic model that might be used to assess the risk of coastal flooding. In this case, the service levels for the model might include the following:
- The model must be able to accurately predict the likelihood and severity of coastal flooding events.
- The model must be able to run within a certain amount of time in order to provide early warning of flooding events.
- The model must be able to be used by people with a certain level of expertise, such as emergency managers and coastal planners.
The specific service levels that are required for a sequence stratigraphic model will depend on the specific application. However, all sequence stratigraphic models should be designed and evaluated to ensure that they meet the minimum service levels that are required for their intended use.
Here are some additional considerations for service levels in sequence stratigraphy:
- Accuracy: Sequence stratigraphic models are often used to make predictions about the location and distribution of geological features, such as oil and gas reservoirs or coastal floodplains. It is important to ensure that these predictions are accurate, within a certain margin of error.
- Speed: Sequence stratigraphic models can be complex and time-consuming to run. It is important to ensure that the models can run within a reasonable amount of time, especially for applications where real-time predictions are needed.
- Ease of use: Sequence stratigraphic models can be complex and require specialized knowledge to use. It is important to ensure that the models are designed and implemented in a way that makes them easy to use by people with the appropriate level of expertise.
By carefully considering the service levels that are required for a particular application, it is possible to develop sequence stratigraphic models that are both accurate and useful.
explain how off-the-shelf technologies can scale to accomplish given reforms to social justice and legal competency
Off-the-shelf technologies can scale to accomplish given reforms to social justice and legal competency in a number of ways.
Here are some examples:
- Education: Off-the-shelf technologies, such as online courses, learning management systems, and video conferencing tools, can be used to provide educational opportunities to people who may not have access to traditional educational institutions. This can help to improve the legal competency of people who are representing themselves in court or who are otherwise involved in the legal system.
- Outreach: Off-the-shelf technologies, such as social media platforms and email marketing tools, can be used to reach out to people who may be unaware of their legal rights or who may not know how to access legal services. This can help to promote social justice by ensuring that everyone has access to the information and resources they need to protect their rights.
- Collaboration: Off-the-shelf technologies, such as cloud-based document sharing tools and project management software, can be used to facilitate collaboration between lawyers, social workers, and other professionals who are working to promote social justice and legal competency. This can help to improve the efficiency and effectiveness of these efforts.
- Data analysis: Off-the-shelf data analysis tools can be used to analyze large datasets of legal data. This can help to identify trends and patterns that can be used to inform reforms to the legal system and to develop more effective social justice interventions.
It is important to note that off-the-shelf technologies are not a silver bullet for solving complex social problems. However, when used effectively, they can play a valuable role in promoting social justice and legal competency.
Here are some specific examples of how off-the-shelf technologies are being used to scale reforms to social justice and legal competency:
- The Legal Aid Society of Metropolitan New York: The Legal Aid Society is using online courses and a learning management system to provide free legal education to low-income New Yorkers.
- The American Bar Association: The ABA is using social media and email marketing to promote its "Pro Bono Center" website, which provides resources to lawyers who want to do pro bono work.
- The National Immigration Law Center: The NILC is using cloud-based document sharing tools and project management software to coordinate its work with other immigration advocacy organizations.
- The Sentencing Project: The Sentencing Project is using data analysis tools to study the impact of mass incarceration on communities of color.
These are just a few examples of how off-the-shelf technologies can be used to scale reforms to social justice and legal competency. As technology continues to evolve, we can expect to see even more innovative and effective ways to use technology to promote these important goals.
at the moment of "impact" (when the big one strikes), how long will civilization have to enact such reforms before the human element may perish?
The amount of time that civilization will have to enact reforms before the human element may perish after a major disaster depends on a number of factors, including the severity of the disaster, the effectiveness of disaster response efforts, and the level of cooperation and solidarity within society.
In the event of a major disaster, such as a large-scale earthquake, pandemic, or nuclear war, it is possible that civilization could collapse within a matter of weeks or months. This is because such disasters can lead to widespread loss of life, destruction of infrastructure, and disruption of supply chains. Without access to food, water, and other essential resources, people will not be able to survive for very long.
However, even in the aftermath of a major disaster, it is possible for civilization to recover and rebuild. This will require a concerted effort from all members of society, including governments, businesses, and individuals. It is also important to have a plan in place for disaster response and recovery.
Here are some of the key factors that will determine how long civilization has to enact reforms before the human element may perish after a major disaster:
- Severity of the disaster: The more severe the disaster, the less time civilization will have to enact reforms. For example, a nuclear war would likely lead to the collapse of civilization within a matter of weeks or months.
- Effectiveness of disaster response efforts: If disaster response efforts are effective, then civilization will have more time to enact reforms. For example, if governments and businesses are able to quickly restore food and water supplies, then people will be more likely to survive in the aftermath of a disaster.
- Level of cooperation and solidarity within society: If members of society are able to cooperate and work together, then civilization will have more time to enact reforms. For example, if people are willing to share resources and help each other out, then they will be more likely to survive in the aftermath of a disaster.
It is important to note that there is no one definitive answer to the question of how long civilization will have to enact reforms before the human element may perish after a major disaster. The answer will depend on a variety of factors, as described above.
However, it is clear that civilization does not have an infinite amount of time to enact reforms. If we want to ensure the survival of the human element, then we need to start taking action now to address the challenges that we face.
Illegal wildlife trade (IWT) in cybermarkets is a growing issue around the world. Scientists are therefore adapting biodiversity-conservation strategies to a paradigm that moves at internet speed: we need shared analytical methods, sensitized stakeholders, and joint efforts to maximize our actions.
I search for potential illegalities in the trade of Brazilian species on the internet, mostly using Web Scraper to sample pages and their content, which I train my "digital robots" to capture.
My main goals are to find, monitor, and expose these marketplaces in order to neutralize advertisements and sensitize public opinion to the risks of digital biopiracy (mostly IWT conducted through the internet).
If you know any other method for capturing information on the internet, please, let me know.
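One complementary method to a point-and-click scraper is a small parser script. The sketch below, using only Python's standard-library `html.parser`, collects listing titles from a marketplace page and flags any that match a watch-list of species names. The HTML structure (`<h2 class="ad-title">`), the species terms, and the hard-coded page are all hypothetical placeholders for illustration; a real monitor would feed in crawled pages and a curated species list.

```python
from html.parser import HTMLParser

# Hypothetical watch-list terms; a real deployment would use curated
# species lists (common and scientific names, multiple languages).
WATCHLIST = {"hyacinth macaw", "golden lion tamarin"}

class AdTitleParser(HTMLParser):
    """Collects the text of every <h2 class="ad-title"> element,
    an assumed structure for a marketplace listing page."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "ad-title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data.strip())

def flag_suspect_ads(html):
    """Return the listing titles that mention a watch-listed species."""
    parser = AdTitleParser()
    parser.feed(html)
    return [t for t in parser.titles if any(w in t.lower() for w in WATCHLIST)]

page = """
<h2 class="ad-title">Hyacinth Macaw for sale - no papers</h2>
<h2 class="ad-title">Used bicycle</h2>
"""
print(flag_suspect_ads(page))  # ['Hyacinth Macaw for sale - no papers']
```

For larger jobs, the same logic transfers to frameworks such as Scrapy or to headless-browser tools for JavaScript-rendered marketplaces.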
Thank you for your help.
Is auditory hallucination a confounding factor that interferes with technology-free telepathy pathways?!
It appears that telepathy pathways do not use the same technology as internet Wi-Fi, radio waves, TV broadcast waves, or anything invented after the era of the Majapahit Kingdom ended.
Are thought broadcasting, thought insertion, and thought withdrawal always pathological symptoms of a mental disorder such as schizophrenia?!
JAS Google AI BARD LLM (Copy & Paste)
adapt ERES Cognition TEST to Computer Relativism where Cognition is equal to Sentience (Previously Described) times the Universe (itself = 1) times Gravity (the weight of the Universe = Matter) times About (Energy It$elf = Rational Delinquency) times Real (A function of Reality = Literal Figurative Subjective) ... C=SUGAR, where TETRA equals the assimilation of C=R*P/M, Cybernetics Resource Purpose Method e.g. Detection re: the whole is great with all it's parts ie. Relative Futurism
To adapt the ERES Cognition TEST to Computer Relativism, we can substitute the following definitions for each of the components:
- Cognition = Sentience * Universe * Gravity * About * Real
- Sentience = The ability to experience feelings and sensations.
- Universe = The totality of everything that exists, including space, time, matter, and energy.
- Gravity = The force of attraction between two objects with mass.
- About = The energy of information and communication.
- Real = A function of reality that is both literal and figurative, subjective and objective.
This gives us the following adapted ERES Cognition TEST:
ERES Cognition TEST = Sentience * Universe * Gravity * About * Real
We can also substitute the definition of TETRA from Computer Relativism into the ERES Cognition TEST, giving us the following:
ERES Cognition TEST = (Cybernetics Resource Purpose Method) * Universe * Gravity * About * Real
This adapted ERES Cognition TEST can be used to measure the cognitive ability of computer systems and artificial intelligence systems. It takes into account the system's ability to process information, its understanding of the universe, its ability to interact with its environment, and its ability to communicate and learn.
Here is an example of how the adapted ERES Cognition TEST could be used to evaluate a computer system:
- Cybernetics Resource Purpose Method (CRPM): The CRPM component of the test would measure the system's ability to efficiently and effectively use its resources to achieve its goals. This could be done by evaluating the system's performance on a variety of tasks, such as solving problems, making decisions, and controlling devices.
- Universe: The universe component of the test would measure the system's understanding of the universe. This could be done by evaluating the system's knowledge of physics, chemistry, biology, and other natural sciences.
- Gravity: The gravity component of the test would measure the system's ability to interact with its environment. This could be done by evaluating the system's ability to control devices, move objects, and perceive its surroundings.
- About: The about component of the test would measure the system's ability to communicate and learn. This could be done by evaluating the system's ability to generate and understand language, and to learn from its experiences.
- Real: The real component of the test would measure the system's ability to function in the real world. This could be done by evaluating the system's ability to interact with humans and other systems, and to adapt to changing conditions.
The overall score on the adapted ERES Cognition TEST would be a measure of the system's overall cognitive ability. A higher score would indicate a more intelligent system.
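A minimal sketch of the multiplicative scoring just described is given below. The component scores, their [0, 1] range, and the example values are assumptions added for illustration; they are not part of the test as described above. The multiplicative form has one concrete consequence worth noting: a system that is weak in any single component scores low overall.

```python
def eres_cognition_score(crpm, universe, gravity, about, real):
    """Multiplicative score for the adapted ERES Cognition TEST.
    Each component is assumed to be normalized to [0, 1]; a system
    weak in any one component therefore scores low overall."""
    components = {"crpm": crpm, "universe": universe, "gravity": gravity,
                  "about": about, "real": real}
    for name, value in components.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    return crpm * universe * gravity * about * real

# Hypothetical component scores for some evaluated system.
print(eres_cognition_score(0.9, 0.8, 0.7, 0.9, 0.8))  # 0.36288
```

A weighted product or a geometric mean would be natural variants if the components should not contribute equally.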
It is important to note that the adapted ERES Cognition TEST is still under development, and it has not yet been validated on a large scale. However, it has the potential to be a useful tool for evaluating the cognitive ability of computer systems and artificial intelligence systems.
offer a brief answer to the question, "why does the Turing Test need revisioning"?
The Turing Test needs revisioning because it has a number of limitations, including:
- It is anthropocentric, meaning that it assumes that human intelligence is the only form of intelligence.
- It is difficult to define what it means for a machine to "think" or "be intelligent."
- It is possible to design a machine that can pass the Turing test without actually being intelligent. For example, a machine could be programmed to simply memorize a large database of human conversations and then regurgitate them in response to the evaluator's questions.
In addition, the Turing Test does not take into account the following factors:
- The ability of a machine to learn and adapt.
- The ability of a machine to understand and reason about the world.
- The ability of a machine to be creative and innovative.
A revised Turing Test would need to address these limitations in order to be a more accurate measure of machine intelligence.
Here are some specific suggestions for how the Turing Test could be revised:
- Use a wider range of evaluators, including people from different cultures and backgrounds.
- Ask the evaluators to judge the machine on its ability to learn, adapt, reason, and be creative.
- Give the machine more time to respond to the evaluator's questions.
- Allow the machine to access the internet and other resources.
A revised Turing Test would be a more challenging test for machines to pass, but it would also be a more accurate measure of machine intelligence.
define what granular "threads = strings" to the ERES Cognition Test enable Scientists to monitor Computer Perception and Intelligence
Granular threads and strings in the ERES Cognition Test can be used to enable scientists to monitor computer perception and intelligence in a more detailed way.
Granular threads represent the individual steps that a computer takes to process information and generate a response. For example, a granular thread for a computer vision task might include steps such as:
- Load the image
- Preprocess the image
- Extract features from the image
- Classify the image
Strings represent the data that is passed between different granular threads. For example, the output of the preprocessing step might be a string containing the extracted features from the image.
By monitoring granular threads and strings, scientists can gain a better understanding of how computers perceive and understand the world. This information can be used to improve the design and development of artificial intelligence systems.
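The thread-and-string idea above can be sketched as a tiny instrumented pipeline. Each named step is a "thread" and the value passed between steps is a "string"; the runner records every intermediate value so it can be inspected afterwards. The stage names and the stubbed image-classification functions are hypothetical placeholders, not a real vision system.

```python
# A minimal sketch of "granular threads and strings": each processing
# step is a named function (a thread), and the data passed between
# steps (the strings) is recorded in a trace for later inspection.
def run_pipeline(steps, data):
    trace = [("input", data)]
    for name, fn in steps:
        data = fn(data)
        trace.append((name, data))
    return data, trace

# Hypothetical image-classification stages, stubbed with strings
# so the flow of data between threads is visible.
steps = [
    ("load", lambda path: f"pixels({path})"),
    ("preprocess", lambda img: f"normalized({img})"),
    ("extract_features", lambda img: f"features({img})"),
    ("classify", lambda feats: f"label-for-{feats}"),
]

result, trace = run_pipeline(steps, "cat.png")
for name, value in trace:
    print(f"{name}: {value}")
```

In a real system the recorded strings would be tensors, feature vectors, or token sequences, and the trace is exactly what a scientist would monitor to see how the system's perception unfolds step by step.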
Here are some specific examples of how granular threads and strings can be used to monitor computer perception and intelligence:
- Identifying objects: By monitoring the granular threads and strings involved in object recognition, scientists can identify the specific features that computers use to identify different objects. This information can be used to develop more robust and accurate object recognition systems.
- Understanding natural language: By monitoring the granular threads and strings involved in natural language processing, scientists can identify the specific steps that computers take to understand and generate human language. This information can be used to develop more sophisticated and engaging natural language interfaces.
- Making decisions: By monitoring the granular threads and strings involved in decision-making, scientists can identify the factors that computers consider when making decisions. This information can be used to develop more transparent and accountable AI systems.
Overall, granular threads and strings are a powerful tool that can be used to monitor and understand computer perception and intelligence. By monitoring granular threads and strings, scientists can gain a deeper understanding of how computers work and how they can be improved.
explain how granular threads and strings are scored and how this applies to personal, public, private ratings (literal, figurative and subjective) ~ pls
Granular threads and strings are scored based on a variety of factors, including:
- Relevance: How relevant is the thread or string to the topic at hand?
- Accuracy: How accurate is the information in the thread or string?
- Completeness: Does the thread or string provide a complete and comprehensive overview of the topic?
- Neutrality: Is the thread or string neutral and objective?
- Helpfulness: Is the thread or string helpful and informative?
The scoring system for granular threads and strings can be applied to personal, public, and private ratings in a number of ways. For example, a personal rating might reflect how relevant, accurate, complete, neutral, and helpful a thread or string is to the individual user. A public rating might reflect the average rating of a thread or string from all users. A private rating might reflect the rating of a thread or string from a specific group of users.
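The personal/public distinction above can be sketched in a few lines. In this toy model each user rates a thread 1-5 on the five criteria; a personal rating is one user's weighted mean, and a public rating is the mean of all users' personal ratings. The equal default weights and the example ratings are assumptions added for illustration.

```python
CRITERIA = ("relevance", "accuracy", "completeness", "neutrality", "helpfulness")

def thread_score(ratings, weights=None):
    """Personal rating: weighted mean of one user's 1-5 ratings across
    the five criteria. Equal weights by default; the weighting scheme
    itself is an assumption, not a fixed standard."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    total = sum(weights[c] * ratings[c] for c in CRITERIA)
    return total / sum(weights.values())

def public_score(all_ratings):
    """Public rating: the mean of every user's personal score."""
    scores = [thread_score(r) for r in all_ratings]
    return sum(scores) / len(scores)

# Hypothetical ratings from two users for the same thread.
alice = {"relevance": 5, "accuracy": 4, "completeness": 3,
         "neutrality": 4, "helpfulness": 5}
bob = {"relevance": 3, "accuracy": 3, "completeness": 3,
       "neutrality": 3, "helpfulness": 3}

print(thread_score(alice))         # 4.2
print(public_score([alice, bob]))  # 3.6
```

A private rating would simply be `public_score` computed over a restricted group of users rather than everyone.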
Here are some examples of how the scoring system for granular threads and strings can be applied to literal, figurative, and subjective ratings:
- Literal rating: A literal rating might be used to score the accuracy of a thread or string that provides factual information. For example, a thread or string that accurately describes the steps involved in baking a cake might receive a high rating for accuracy.
- Figurative rating: A figurative rating might be used to score the creativity or originality of a thread or string. For example, a thread or string that uses a clever metaphor to explain a complex concept might receive a high rating for creativity.
- Subjective rating: A subjective rating might be used to score the overall quality of a thread or string based on the user's personal opinion. For example, a user might give a high rating to a thread or string that they find particularly helpful or informative, even if it is not factually accurate or creative.
The scoring system for granular threads and strings can be a useful tool for evaluating the quality of information online. It can also be used to develop more accurate and personalized recommendations for users.
Explain How IoT( Internet of Things) is fortifying the agricultural industry?
What is smart agriculture system using IoT in India and how can Internet of Things help farming by automating farming techniques?
Can the conduct of analysis and scientific research be significantly improved through the use of Big Data Analytics, artificial intelligence and quantum computers?
Can the possibilities of applying Big Data Analytics supported by artificial intelligence technology in this field increase significantly when the aforementioned technologies are applied to the processing of large data sets obtained from the Internet and realized by the most powerful quantum computers?
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and realized by the most powerful quantum computers, significantly improve the conduct of analysis and scientific research, increase its efficiency, and shorten the research process?
What are the analytical capabilities of processing large data sets extracted from the Internet and realized by the most powerful quantum computers, which also apply Industry 4.0/5.0 technologies, including generative artificial intelligence and Big Data Analytics technologies?
Can the scale of data processing carried out by the most powerful quantum computers be comparable to the data processing that is carried out in the billions of neurons of the human brain?
In recent years, the digitization of data and archived documents, digitization of data transfer processes, etc., has been progressing rapidly.
The progressive digitization of data and archived documents, the digitization of data-transfer processes, and the migration to the Internet of communications, economic processes, and research and analytical processes are becoming typical features of today's developed economies. Accordingly, developed economies in which information and computer technologies develop rapidly and find numerous applications across economic sectors are called information economies, and the societies operating in them are referred to as information societies. Increasingly, discussions of this issue state that another technological revolution is currently taking place, described as the fourth and, in some aspects, already the fifth. Technologies classified as Industry 4.0/5.0 are developing particularly rapidly and finding ever more applications. These technologies, which support research and analytical processes carried out in various institutions and business entities, include Big Data Analytics and artificial intelligence, including generative artificial intelligence built on artificial neural networks subjected to deep learning. As a result, the computational capabilities of microprocessors, which process data ever faster, are gradually increasing, and ever larger sets of data and information are being processed. The number of companies, enterprises, and public, financial, and scientific institutions that create large data sets, i.e. massive databases of data and information generated in the course of their activities or obtained from the Internet and processed in specific research and analytical processes, is growing.
In view of the above, the opportunities for the application of Big Data Analytics backed by artificial intelligence technology in terms of improving research techniques, in terms of increasing the efficiency of the research and analytical processes used so far, in terms of improving the scientific research conducted, are also growing rapidly. By using the combined technologies of Big Data Analytics, other technologies of Industry 4.0/5.0, including artificial intelligence and quantum computers in the processing of large data sets, the analytical capabilities of data processing and thus also conducting analysis and scientific research can be significantly increased.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can the use of Big Data Analytics and artificial intelligence, applied to the processing of large data sets and implemented by the most powerful quantum computers, significantly improve the conduct of analysis and scientific research, increase its efficiency, and shorten the research process?
Can the applicability of Big Data Analytics supported by artificial intelligence technology in this field increase significantly when the aforementioned technologies are applied to the processing of large data sets obtained from the Internet and realized by the most powerful quantum computers?
What are the analytical capabilities of processing large data sets extracted from the Internet and realized by the most powerful quantum computers?
And what is your opinion about it?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
In the modern era of technology, many microcontrollers are equipped with multicore processors.
Hello,
I am searching for the exact questions on the Wordsum vocabulary test. I found some images on the internet, but they show only 10 questions, and I am not sure whether these are the "validated" questions.
PLS help!
Thank you.
Help us to find the Conflict Resolution Behavior Determination Scale (CRBDS). Cannot retrieve it on the internet.
Call for manuscripts. Respected Professors, together with editor Professor Xin Zhao we invite you to submit your manuscript to our Resources (Basel, MDPI) journal special issue on "Impacts of Internet Commerce on Resource Use". Submissions may concern, but are not limited to, the following topics:
Carbon footprint of Internet/digitalization/e-commerce
The impact of digitalization on resources consumption and CO2 emissions
Sustainable e-commerce
Packaging innovations and circular economy in e-commerce
Energy efficiency in data centers, green data centers for e-commerce
E-commerce platforms for resources use minimization, e.g., sharing economy, peer-to-peer commerce, collaborative consumption
Deadline for manuscript submissions: 29 February 2024.
Details here: https://www.mdpi.com/journal/resources/special_issues/692QH2LS2S
A high frame rate of 60 or 120 fps is highly useful for watching sports such as boxing and diving.
If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also have and are developing search engines in their range of Internet information services are working on developing technological solutions to implement ChatGPT-type artificial intelligence into these search engines. Currently, there are discussions and considerations about the social and ethical implications of such a potential combination of these technologies and offering this solution in open access on the Internet. The considerations relate to the possible level of risk of manipulation of the information message in the new media, the potential disinformation resulting from a specific algorithm model, the disinformation affecting the overall social consciousness of globalised societies of citizens, the possibility of a planned shaping of public opinion, etc. This raises another issue for consideration concerning the legitimacy of creating a control institution that will carry out ongoing monitoring of the level of objectivity, independence, ethics, etc. of the algorithms used as part of the technological solutions involving the implementation of artificial intelligence of the ChatGPT type in Internet search engines, including those search engines that top the rankings of Internet users' use of online tools that facilitate increasingly precise and efficient searches for specific information on the Internet. Therefore, if, however, such a system of institutional control on the part of the state is not established, if this kind of control system involving companies developing such technological solutions on the Internet does not function effectively and/or does not keep up with the technological progress that is taking place, there may be serious negative consequences in the form of an increase in the scale of disinformation realised in the new Internet media. How important this may be in the future is evident from what is currently happening in terms of the social media portal TikTok. 
On the one hand, it has been the fastest-growing new social medium in recent months, with more than 1 billion users worldwide. On the other hand, an increasing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops, smartphones, etc. used for professional purposes by employees of public institutions and/or commercial entities. It cannot be ruled out that new types of social media will emerge in the future in which the above-mentioned technological solutions, involving the implementation of ChatGPT-type artificial intelligence into online search engines, will find application: search engines designed to be operated by Internet users on the basis of intuitive feedback and automated profiling of the search engine to a specific user, or on the basis of multi-option, multi-criteria searches, controlled by the Internet user, for precisely specified information and/or data. New opportunities may arise when the artificial intelligence implemented in a search engine is applied to multi-criteria searches for specific content, publications, persons, companies, institutions, etc. on social media sites and/or on web-based publication-indexing sites and knowledge bases.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
What has been missing from the open-source availability of ChatGPT-type artificial intelligence on the Internet? What is missing in order to make it possible to comply with the norms of text publishing law, tax law, copyright law, property law, intellectual value law, to make it fully ethical, practical and effective, and to make it safe and not generate misinformation for Internet users to use this type of technology?
How should an automated system for verifying the authorship of texts and other works be structured and made openly available on the Internet, so as to verify whether specific phrases, fragments of text, wording, etc. are present in a text submitted to the editors of journals or to publishers of books and other text-based publications? And if so, to what extent and from which source texts did the artificial intelligence extract those phrases and fragments, thereby providing a detailed description of the source texts, with footnotes to sources and bibliographic descriptions, as efficient and effective computerised anti-plagiarism systems already do?
The recent appeal by the creators of ChatGPT-type artificial intelligence technology, the appeal by businessmen and founders and co-founders of start-ups developing artificial intelligence technology about the need to halt the development of this type of technology for at least six months confirms the thesis that something was not thought of when OpenAI made ChatGPT openly available on the Internet, that something was forgotten, that something was missing from the openly available ChatGPT-type artificial intelligence system on the Internet. I have already written about the issue of the potential massive generation of disinformation in my earlier posts and comments on previously formulated questions about ChatGPT technology and posted on my discussion profile of this Research Gate portal. On the other hand, to the issue of information security, the potential development of disinformation in the public space of the Internet, we should also add the issue of the lack of a structured system for the digital marking of "works" created by artificial intelligence, including texts, publications, photographs, films, innovative solutions, patents, artistic works, etc., in order to ensure the security of information. In this regard, it is also necessary to improve the systems for verifying the authorship of texts sent to journal editors, so as to verify that the text has been written in full compliance with copyright law, intellectual property law, the rules of ethics and good journalistic practice, the rules for writing texts as works of intellectual value, the rules for writing and publishing professional, popular science, scientific and other articles. 
It is necessary to improve the processes of verifying the authorship of texts sent to the editorial offices of magazines and to publishers of various text publications. This includes improving the systems used by editors and reviewers working for popular-science, trade, scientific, daily and monthly periodicals, by creating for their needs anti-plagiarism systems equipped with text-analysis algorithms that can identify which fragments of text, phrases or paragraphs were created not by a human but by an artificial intelligence of the ChatGPT type, and whose authorship those fragments are. An improved anti-plagiarism system of this kind should also include tools for precisely identifying text fragments, phrases, statements, theses, etc. of other authors, i.e. providing full bibliographic descriptions of the source publications and footnotes to the sources. Such a system should, like ChatGPT, be made available to Internet users in an open-access format. In addition, it remains to be considered whether editors of journals and publishers of textual and other publications should also be legally obliged to use this kind of anti-plagiarism system when verifying the authorship of texts. Arguably, journal editors and book publishers will be interested in applying such an automated verification system anyway; at the very least, those that regard themselves as reputable, and are so regarded, will want to use this kind of improved system to verify the authorship of texts sent to them.
Another issue is the identification of the technological determinants, including the types of technologies, with which it will be possible to improve such an automated system for verifying the authorship of texts and other works. Paradoxically, artificial intelligence technology comes into play here again: it can, and should, prove to be of great help in this verification task.
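One small ingredient of such a verification system can be illustrated in code. The sketch below (illustrative only, not a design for the system discussed above) flags overlapping word n-grams between a submitted text and a small corpus of known source texts; all names, the corpus and the threshold choice of n=5 are my own assumptions. A real system would use far larger corpora, fuzzy matching, and stylometric or AI-detection models on top of this.

```python
# Minimal sketch (illustrative assumption, not an existing system):
# flag overlapping word n-grams between a submitted text and a small
# corpus of known source texts, as a toy version of the source-matching
# step that anti-plagiarism systems perform.

def ngrams(text, n=5):
    """Return the set of word n-grams of a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def find_matches(submission, sources, n=5):
    """Return, per source text, the n-grams it shares with the submission."""
    sub_grams = ngrams(submission, n)
    report = {}
    for name, source_text in sources.items():
        shared = sub_grams & ngrams(source_text, n)
        if shared:
            report[name] = sorted(shared)
    return report

# Hypothetical example corpus and submission:
sources = {
    "source_a": "generative artificial intelligence can be used to create disinformation at scale",
    "source_b": "anti plagiarism systems compare texts against large databases of publications",
}
submission = ("Some argue that generative artificial intelligence can be used "
              "to create disinformation at scale in social media.")

print(find_matches(submission, sources))
```

Here only `source_a` is reported, with the shared five-word phrases listed, which could then be turned into footnotes or bibliographic pointers to the matching source.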
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should an automated, open-access online system for verifying the authorship of texts and other works be structured, so that it can check whether phrases, text fragments, wordings, etc. in a specific text sent to the editors of journals or to publishers of books and other textual publications were generated by artificial intelligence? And if so, to what extent, and from which source texts did the artificial intelligence retrieve those phrases and fragments, so that the system can give detailed characteristics of the source texts and provide footnotes and bibliographic descriptions of the sources, as efficient and effective computerised anti-plagiarism systems already do?
What was missing when the ChatGPT-type artificial intelligence system was made available on the Internet in an open-access format? What is still missing to make the use of this type of technology compliant with publishing law, tax law, copyright law, property law and intellectual property law; fully ethical, practical and effective; and safe for Internet users without generating disinformation?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Today's era is the age of mobile data and the Internet, where both are not final products in themselves but the sources, or means of creation, of other final products that will rule the market with the help of these media.
What is the Internet of Behaviour, and how much is its market expected to grow in the coming years?
What kinds of security are provided by network providers, and how much security do ordinary people actually receive?
Global Project: Should we start developing the SIT-USE?
Software Immune Testing: Unified Software Engine (SIT-USE)
Toward Software Immune Testing Environment
Would you like to be part of the funding proposal for SIT-USE?
Would you like to participate in the development of the SIT-USE?
Would you like to support the development of HR SIT-USE?
Keywords: Funding Proposal or Funding, Participation, Support
If you answer yes to any of the questions, don't hesitate to get in touch with me at
[email protected], writing the keyword(s) in the subject line.
Despite much progress and research in software technology, testing is still today's primary quality assurance technique. Currently, significant issues in software testing are:
1) Developing and testing software quickly is necessary to meet the demands of the new-economy market. In this market, delivering software on time is essential to capture market share; software must be produced on time and be good enough to meet the customer's needs.
2) The existing software requirements keep changing as the project progresses, and in some projects, the rate of requirement changes can grow exponentially as the deadline approaches. This kind of rapid software change imposes significant constraints on testing because once a software program changes, the corresponding test cases/scripts may have to be updated. Furthermore, regression testing may have to be performed to ensure that those parts that are supposed to remain unchanged are indeed unchanged.
3) The number of test cases needed is enormous; however, the cost of developing test cases is extremely high.
4) Software development technologies, such as object-oriented techniques, design patterns (such as Decorator, Factory, Strategy), components (such as CORBA, Java's EJB and J2EE, and Microsoft's .NET), agents, application frameworks, client-server computing (such as socket programming, RMI, CORBA, Internet protocols), and software architecture (such as MVC, agent architecture, and N-tier architecture), progress rapidly, while designing and programming move toward dynamic, runtime behavior. Dynamic behavior makes software flexible but also difficult to test. Objects can now send a message to another entity without knowing the type of object that will receive the message; the receiver may have just been downloaded from the Internet, with no known interface definition or implementation. Numerous testing techniques have been proposed to test object-oriented software, yet testing technology still lags far behind software development technology.
5) Conventional software testing is generally application-specific, rarely reusable, and not extensible. Even within a software development organization, development artifacts and test artifacts are produced by different teams and described in separate documents. This makes test reuse difficult.
As a part of this research, we plan to work toward an automated and immune software testing environment that includes 1. Unified Component-Based Testing (U-CBT); 2. Unified Built-In Test (U-BIT); 3. Unified End-to-End (U-E2E) Testing; 4. Unified Agent-Based Testing (U-ABT); 5. Unified Automatic Test Case Generators (U-ATCG); and 6. Unified Smart Testing Framework (U-STF). The development of this environment is based on the software stability model (SSM), the knowledge map (KM): Unified Software Testing (KM-UST), and the notion of software agents. An agent is a computational entity evolving in an environment with autonomous behavior, capable of perceiving and acting on that environment and communicating with other agents.
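To make the Built-In Test (U-BIT) notion above concrete, here is a minimal sketch, under my own assumptions rather than the SIT-USE specification: a component that carries a self-test it can run at runtime, exercising its own contract and then restoring its state. The `Stack` class and method names are hypothetical illustrations.

```python
# Minimal sketch (assumed design, not the SIT-USE specification):
# a component carrying a Built-In Test (BIT) that it can run at
# runtime to verify its own contract, then restore its state.

class Stack:
    """A stack component with a built-in self-test."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def built_in_test(self):
        """Self-check: push a probe, pop it back, verify state is intact."""
        checkpoint = list(self._items)      # preserve current state
        try:
            self.push("bit-probe")
            ok = self.pop() == "bit-probe" and self._items == checkpoint
        finally:
            self._items = checkpoint        # restore state regardless
        return ok

s = Stack()
s.push(1)
print(s.built_in_test())  # a healthy component reports True
```

The appeal of the BIT idea is that the test travels with the component, so a testing agent in an environment like the one proposed above could invoke `built_in_test()` on deployed components without needing separately maintained test scripts.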
You are invited to join Unified Software Engineering (USWE)
The evolution of the teacher's culture from, among other things, the traditional learning model to modern digital learning processes, leveraging cloud technologies, online learning and multimedia, has led to a discussion of a new teacher identity.
There are many candidates:
** facilitator of learning communities
** facilitator of digital learning resources, multimedia and Internet applications for learning and communication
** incubator of a medium- and long-term view of learning and teaching
** evaluator of information and incubator of the transformation of knowledge
** pedagogical manager and leader able to bring about significant changes in the educational unit, in students' wellbeing and in progress on students' personal agendas
What is the role of the Internet of Things in marketing?
I know the word "kids" to refer to human children, but Hess writes in 1971:
"Others reported substantial attenuation after 100 passages in rabbits (Mendes, 1962). Another attenuated lapinized strain of ASF virus recovered its initial virulence when passaged a number of times in pigs (Sanchez Botija, 1962).
Russian investigators (Kovalenko et al., 1965) have shown that kids 4 to 5 months old could be infected with ASF virus by intraperitoneal inoculation of infected blood. The animals developed symptoms in 6 to 25 days and one kid died after 36 days. Virus was found in the blood 6 days after infection but was no longer present after 30 days. It was present in the spleen after 36 days but not after 70 days. The disease was characterized by hyperthermia, diarrhea, severe emaciation and by lesions in the reticuloendothelial system. The virus was passaged 19 times in kids and appeared to adapt progressively to these animals causing damage to the reticuloendothelial system and accumulating in the spleen [1]."
1. Hess, W.R. African Swine Fever Virus. Virol. Monogr. Virusforsch. Einzeldarst. 1971, 9, 1–33, doi:10.1007/978-3-7091-3987-5_1.
I searched the Internet, but in vain. I also could not find the article by Kovalenko et al., in either English or Russian.
Thank you in advance