What is cooperation in AI?
As advances in artificial intelligence (AI) allow AI systems to live and work alongside humans as independent agents, it has become increasingly common for people to cooperate with autonomous AI agents, a practice known as human-AI cooperation (HAIC).
With improvements in artificial intelligence technology, robots are proliferating among us, so it's important to remember that humans and machines inevitably have a lot in common. After all, we designed them to do the same things we do and to be capable of learning, just like us. The visceral human drive to automate processes using machines is not new. Humans and machines share a long history of working together to make continual improvements in almost every aspect of our lives.
AI is another disruptive technology of our time, and just like its predecessors, it will have a profound impact on our existence.
AI is a set of algorithms designed by humans and expressed through machines that can incorporate the five human senses (seeing, smelling, tasting, hearing and feeling) and the ability to communicate (speaking). The very senses that have historically been used to magnify the differences between humans and machines are now enabling machines to handle ever more qualitative operations and analysis effectively.
Recent advancements in machine learning, deep learning and quantum computing have increasingly expanded both the desire and the ability of people to automate processes. AI and robotic applications enable people to create a new world of accelerated process automation that we could not have achieved without a continued focus on improving intelligent systems and machines.
The History Of Human And Machine Cooperation
Humans and machines have coexisted on a large scale since the Industrial Revolution in the late 1700s. It commenced a period of profound technological achievements that brought on massive social and economic change spanning almost every conceivable industry in the world — from textiles and transportation to printing and consumer goods to healthcare, schools and governments.
It was characterized by machines designed by humans to replace human effort and automate repetitive, time-consuming tasks, allowing humankind the freedom to be more creative and use our imaginations to explore new ideas. More importantly, it gave us the ability to improve the quality and productivity of our work and lives and to evolve as people. It was also a time of massive disruption, uncertainty and fear. And, conceivably, it marked the moment when humans and machines began to coexist and develop an interdependent relationship. While some existing jobs were eliminated and replaced by machines, new types of jobs, job functions and business models were created.
This symbiotic relationship has been playing out with every major advancement in technology, further deepening the human-machine interdependent relationship. From the transistor that introduced smaller, less expensive computers and computer processing in the late 1940s to the internet in the 1970s to the Fourth Industrial Revolution currently underway, people continue to rely on machines to solve human problems and automate human tasks.
While machine learning and neural networks have been under development since the 1950s, advanced by great minds such as Alan Turing and Marvin Minsky, what has changed are the advancements in computing performance and data storage that allow us to capture and retain the significant amounts of data needed to build AI applications. Deep learning applications and neural networks can now mirror and mimic the human brain and, in some cases, outperform our ability to solve problems and take action.
So, how can we rely on a machine that we built if it can surpass our ability to solve problems and take action? And if a machine can act like us, then what differentiates a person from a machine? For the human race to continue its evolutionary path, the cooperative relationship between person and machine must remain strong.
by Toby McClean
AI cooperation on the ground: AI research and development on a global scale (Brookings, 2022.11.08)
The Forum for Cooperation on Artificial Intelligence (FCAI) has investigated opportunities and obstacles for international cooperation to foster development of responsible artificial intelligence (AI). It has brought together officials from seven governments (Australia, Canada, the European Union, Japan, Singapore, the United Kingdom, and the United States) with experts from industry, academia, and civil society to explore similarities and differences in national policies on AI, avenues of international cooperation, ecosystems of AI research and development (R&D), and AI standards development, among other issues. Following a series of roundtables in 2020 and 2021, we issued a progress report in October 2021 that articulated why international cooperation is especially needed on AI, identified significant challenges to such cooperation, and proposed four key areas where international cooperation could deepen: regulatory alignment, standards development, trade agreements, and joint R&D. The report made 15 recommendations on ways to make progress in these areas.
For joint R&D, recommendation R15 of the progress report called for development of “common criteria and governance arrangements for international large-scale AI R&D projects,” with the Human Genome Project (HGP) and the European Organization for Nuclear Research (CERN) as examples of the scale and ambition needed. The report summarized this recommendation as follows:
“Joint research and development applying to large-scale global problems such as climate change or disease prevention and treatment can have two valuable effects: It can bring additional resources to the solution of pressing global challenges, and the collaboration can help to find common ground in addressing differences in approaches to AI. FCAI will seek to incubate a concrete roadmap on such R&D for adoption by FCAI participants as well as other governments and international organizations. Using collaboration on R&D as a mechanism to work through matters that affect international cooperation on AI policy means that this recommendation should play out in the near term.”
FCAI convened a roundtable on February 10, 2022, to explore specific use cases that may be candidates for joint international research and development and to inform the selection and design of such projects based on criteria outlined below. Potential areas considered were climate change, public health, privacy-enhancing technologies for sharing data, and improved tracking of economic growth and performance (economic measurement). This working paper distills the discussions and our analysis and research. We recommend that FCAI governments, stakeholders, and other likeminded entities prioritize cooperative R&D efforts on (1) deployment of AI as a tool for climate change monitoring and management and (2) accelerating development and adoption of privacy-enhancing technologies (PETs).
Strengthening international cooperation on AI
Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.

At the same time, work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE), among others) and the ethical and policy dimensions of responsible AI. In 2018, the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development, and there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI.

While many of these focus on general principles, the past two years have seen efforts to put principles into operation through fully-fledged policy frameworks. Canada’s directive on the use of AI in government, Singapore’s Model AI Governance Framework, Japan’s Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this sense; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU proposal for a regulation on AI has marked the first attempt to introduce a comprehensive legislative scheme governing AI.

In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the “why”); the issues and policy domains that appear most ready for enhanced collaboration (the “what”); and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the “how”). At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions.