Mr Fix It AMERICA
MrFixIt -MrFixItAmerica
www.MrFixIt.Ai
TJ@MrFixIt.Ai - 405-215-5985
AI Ethics

Ethics of artificial intelligence

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems.[1] It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics.

Approaches

Machine ethics

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[2][3][4][5] To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[6]

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[7] More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[8] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.[9]

In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[10]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[11] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[12][13] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[14] They point to programs like the Language Acquisition Device which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity".[15] He suggests that it may be somewhat or possibly very dangerous for humans.[16] This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[17]

There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and that the requirement for an AI to pass the test is too low.[18] A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.[18]
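
As a rough sketch of how such a multi-judge protocol might work in practice, the Python snippet below tallies independent judge verdicts and reports the majority; the majority-vote aggregation rule and the example votes are my own assumptions for illustration, not details from the cited proposal.

```python
# Hypothetical sketch of an "Ethical Turing Test" style evaluation: several
# human judges independently label an AI decision as ethical or unethical,
# and the verdicts are aggregated. The majority-vote rule and the sample
# votes are illustrative assumptions, not part of the proposal itself.
from collections import Counter

def aggregate_verdicts(judge_votes):
    """Return the majority verdict and the level of agreement among judges."""
    tally = Counter(judge_votes)
    verdict, count = tally.most_common(1)[0]
    return verdict, count / len(judge_votes)

votes = ["ethical", "ethical", "unethical", "ethical", "unethical"]
verdict, agreement = aggregate_verdicts(votes)
print(f"Majority verdict: {verdict} (agreement: {agreement:.0%})")
```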

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[15]

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[19] Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation etc.

In Moral Machines: Teaching Robots Right from Wrong,[20] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[21] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[22]
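
The transparency argument can be made concrete: a shallow decision tree's learned policy can be printed as explicit if/then rules, which is not possible for a neural network's weights. The sketch below uses scikit-learn and synthetic data, both of which are illustrative assumptions rather than anything from the cited sources.

```python
# Minimal sketch of why decision trees are cited as "transparent": every
# decision path can be printed as a human-readable rule. Synthetic data and
# feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The full decision logic of the trained model, as nested if/else rules.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```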

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.[23]

Robot ethics

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[24] Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software.[25] Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.

Ethics principles of artificial intelligence

A review of 84 ethics guidelines for AI found 11 clusters of principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.[26]

Luciano Floridi and Josh Cowls created an ethical framework of AI principles based on four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI-enabling principle: explicability.[27]

Transparency, accountability, and open source

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[28] Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.[29] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity.[30] There are numerous other open-source AI developments.

Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE Standards Association has published a technical standard on Transparency of Autonomous Systems: IEEE 7001-2021.[31] The IEEE effort identifies multiple scales of transparency for different stakeholders. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.[32]

Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.[33] The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.[34][35][36]

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence".[37] This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[38] To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.[39] On 21 April 2021, the European Commission proposed the Artificial Intelligence Act.[40]

Ethical challenges

Biases in AI systems

Then-US Senator Kamala Harris speaking about racial bias in artificial intelligence in 2020

AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by their human creators. Also, the data used to train these AI systems can itself have biases.[41][42][43][44] For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;[45] these AI systems were able to detect the gender of white men more accurately than the gender of men with darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.[46] Furthermore, Amazon terminated its use of AI in hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.[47]
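
Disparities like these are usually surfaced by disaggregated evaluation, i.e. computing error rates separately for each demographic group rather than reporting one aggregate score. The short sketch below uses made-up labels and group tags (not data from the cited studies) to show the basic computation.

```python
# Sketch of disaggregated evaluation: compare error rates per group instead of
# reporting a single overall accuracy. Records are fabricated for illustration.
from collections import defaultdict

records = [  # (group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    errors[group] += int(truth != prediction)

for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
```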

Bias can creep into algorithms in many ways. The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system. For instance, Amazon's AI-powered recruitment tool was trained with its own recruitment data accumulated over the years, during which time the candidates that successfully got the job were mostly white males. Consequently, the algorithms learned the (biased) pattern from the historical data and generated predictions for the present/future that these types of candidates are most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turn out to be biased against female and minority candidates. Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.[48] In natural language processing, problems can arise from the text corpus — the source material the algorithm uses to learn about the relationships between different words.[49]
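
A toy sketch of this mechanism, using a fabricated and deliberately skewed dataset rather than Amazon's actual data, shows how a classifier trained on historical outcomes that correlate with a protected attribute reproduces that correlation in its predictions.

```python
# Toy sketch of bias inherited from historical training data. The dataset is
# fabricated and deliberately skewed; it is not any real hiring record.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
# Feature 0: protected-attribute proxy (1 or 0); feature 1: qualification score.
# The historical "hired" label was driven largely by the proxy, not the score.
X = [[random.randint(0, 1), random.random()] for _ in range(500)]
y = [1 if random.random() < (0.7 if proxy == 1 else 0.2) else 0 for proxy, _ in X]

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates receive very different predicted odds.
print("proxy=1, score 0.8:", round(model.predict_proba([[1, 0.8]])[0][1], 2))
print("proxy=0, score 0.8:", round(model.predict_proba([[0, 0.8]])[0][1], 2))
```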

Large companies such as IBM and Google have made efforts to research and address these biases.[50][51][52] One solution for addressing bias is to create documentation for the data used to train AI systems.[53][54] Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.[55]
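
Such documentation is often structured along the lines of "datasheets for datasets" or model cards. The sketch below is a minimal, hypothetical template in Python; the field names and example values are my own assumptions, not a format prescribed by the cited work.

```python
# Minimal, hypothetical sketch of dataset documentation ("datasheet") fields.
# Field names and the example values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    collection_period: str
    collection_method: str
    known_gaps: list = field(default_factory=list)   # e.g. under-represented groups
    intended_uses: list = field(default_factory=list)
    prohibited_uses: list = field(default_factory=list)

sheet = Datasheet(
    name="resume-screening-corpus (hypothetical)",
    collection_period="2008-2018",
    collection_method="historical hiring records",
    known_gaps=["female applicants under-represented"],
    intended_uses=["research on screening bias"],
    prohibited_uses=["automated hiring decisions without human review"],
)
print(sheet)
```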

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it.[56] There are some open-sourced tools[57] by civil societies that are looking to bring more awareness to biased AI.

Robot rights

"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights.[58] It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.[59] These could include the right to life and liberty, freedom of thought and expression, and equality before the law.[60] The issue has been considered by the Institute for the Future[61] and by the U.K. Department of Trade and Industry.[62]

Experts disagree on how soon specific and detailed laws on the subject will be necessary.[62] Glenn McGee reported that sufficiently humanoid robots might appear by 2020,[63] while Ray Kurzweil sets the date at 2029.[64] Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[65]

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:

61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[66]

In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition.[67] Some saw this gesture as openly denigrating of human rights and the rule of law.[68]

The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, as a burden both to the AI agents and to human society.[69]

Artificial suffering

In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as Integrated information theory. Edelman notes one exception had been Thomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of artificial conscious instances. Several labs have openly stated they are trying to create conscious AIs. There have been reports from those with close access to AIs not openly intended to be self-aware that consciousness may already have unintentionally emerged. These include OpenAI co-founder Ilya Sutskever, who in February 2022 wrote that today's large neural nets may be "slightly conscious". In November 2022, David Chalmers argued that it was unlikely current large language models like GPT-3 had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future.[70][71][72]

Threat to human dignity

Joseph Weizenbaum[73] argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:

  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer
  • A therapist (as was proposed by Kenneth Colby in the 70s)

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[74]

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions in which we would prefer to have automated judges and police that have no personal agenda at all.[74] However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and ingrained, which makes them even more difficult to spot and fight against.[75]

Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.[73]

AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard[76] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Liability for self-driving cars

As the widespread use of autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed.[77][78] Recently, there has been debate as to the legal liability of the responsible party if these cars get into accidents.[79][80] In one report, a driverless car hit a pedestrian while the driver was inside the car but the controls were fully in the hands of the computer. This led to a dilemma over who was at fault for the accident.[81]

In another incident on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.[82]

Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.[83] Thus, it falls on governments to regulate drivers who over-rely on autonomous features, as well as to educate them that these are technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.[84][85][86]

Weaponization of artificial intelligence

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy.[11][87] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[88] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[89][13] Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively.[90]

Within the last decade, there has been intensive research into autonomous robots with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[91] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, which is why there should be a set moral framework that the AI cannot override.[92]

There has been a recent outcry with regard to the engineering of artificial intelligence weapons, which has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and South Korea.[93] Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition[94] to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[95]

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[96]

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[95]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact has spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[97]

Opaque algorithms

Approaches like machine learning with neural networks can result in computers making decisions that they and the humans who programmed them cannot explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for explainable artificial intelligence.[98] Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do.[99]
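
One common explainability technique is to summarize a black-box model with an interpretable surrogate. The sketch below (scikit-learn, synthetic data, and my own choice of technique rather than one mandated by the cited sources) fits a shallow decision tree to a random forest's predictions, then prints the tree's rules and how faithfully it tracks the black box.

```python
# Sketch of post-hoc explainability via a surrogate model: approximate a
# black-box classifier with a shallow tree whose rules can be read directly.
# Data, models, and the surrogate approach are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=1)
black_box = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Fit the surrogate to the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.0%}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```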

Negligent or deliberate misuse of AI

A special case of the opaqueness of AI is that caused by its being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency. This can cause people to overlook whether human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent digital governance regulation, such as the EU's AI Act, sets out to rectify this by ensuring that AI systems are treated with at least as much care as one would expect under ordinary product liability. This potentially includes AI audits.

Singularity

Many researchers have argued that, by way of an "intelligence explosion", a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[100] In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[101]

However, Bostrom has also asserted that, instead of overwhelming the human race and leading to our destruction, superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to "enhance" ourselves.[102]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[100][101] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.[103] AI researchers such as Stuart J. Russell,[104] Bill Hibbard,[76] Roman Yampolskiy,[105] Shannon Vallor,[106] Steven Umbrello[107] and Luciano Floridi[108] have proposed design strategies for developing beneficial machines.
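
A toy numerical sketch (all actions and scores are hypothetical) illustrates the underlying worry: an agent that optimizes a literal, written-down utility function can prefer the action that scores highest on the stated objective while violating unstated common-sense constraints.

```python
# Toy sketch of objective misspecification: the stated utility function omits a
# common-sense constraint, so the top-scoring action is not the intended one.
# All actions and numbers are hypothetical.
actions = {
    # action: (stated_objective_score, unstated_harm)
    "comply with request, check side effects": (0.80, 0.0),
    "comply with request, ignore side effects": (0.95, 0.6),
    "maximize the metric via a loophole": (1.00, 0.9),
}

def stated_utility(scores):    # what the designers wrote down
    return scores[0]

def intended_utility(scores):  # what the designers actually wanted
    return scores[0] - scores[1]

print("Optimizing the written objective picks: ",
      max(actions, key=lambda a: stated_utility(actions[a])))
print("Optimizing the intended objective picks:",
      max(actions, key=lambda a: intended_utility(actions[a])))
```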

Institutions in AI policy & ethics

There are many organisations concerned with AI ethics and policy, public and governmental as well as corporate and societal.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[109]

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied.

AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms. Therefore, AI ethics work needs to be incentivized.[110]

Intergovernmental initiatives

  • The European Commission has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its "Ethics Guidelines for Trustworthy Artificial Intelligence".[111] The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020.[112] The European Commission also proposed the Artificial Intelligence Act.[40]
  • The OECD established an OECD AI Policy Observatory.[113]
  • In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence,[114] the first global standard on the ethics of AI.[115]

Governmental initiatives

  • In the United States, the Obama administration put together a Roadmap for AI Policy.[116] The Obama administration released two prominent white papers on the future and impact of AI. In 2019, the White House, through an executive memo known as the "American AI Initiative", instructed NIST (the National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019).[117]
  • In January 2020, in the United States, the Trump administration released a draft executive order issued by the Office of Management and Budget (OMB) on "Guidance for Regulation of Artificial Intelligence Applications" ("OMB AI Memorandum"). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers to the use of AI, and keep American AI technology competitive in a global market. There is a nod to privacy concerns, but no further detail on enforcement. The advancement of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumvent any state laws and regulations that a market might see as too onerous to fulfill.[118]
  • The Computing Community Consortium (CCC) weighed in with a 100-plus page draft report[119] – A 20-Year Community Roadmap for Artificial Intelligence Research in the US[120]
  • The Center for Security and Emerging Technology advises US policymakers on the security implications of emerging technologies such as AI.
  • The Non-Human Party is running for election in New South Wales, with policies around granting rights to robots, animals and, more generally, non-human entities whose intelligence has been overlooked.[121]
  • In Russia, the first-ever Russian "Codex of ethics of artificial intelligence" for business was signed in 2021. It was driven by the Analytical Center for the Government of the Russian Federation together with major commercial and academic institutions such as Sberbank, Yandex, Rosatom, the Higher School of Economics, the Moscow Institute of Physics and Technology, ITMO University, Nanosemantics, Rostelecom, CIAN and others.[122]

Academic initiatives

NGO initiatives

The international non-profit organization Future of Life Institute held a five-day conference in Asilomar in 2017 on the subject of "Beneficial AI", the outcome of which was a set of 23 guiding principles for the future of AI research. Through a shared vision between experts and thought leaders from a variety of disciplines, this conference laid influential groundwork for AI governance principles addressing research issues, ethics and values, and long-term issues.[132]

Private organizations

Role and impact of fiction

The role of fiction with regard to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: historically, fiction has prefigured common tropes that have not only influenced goals and visions for AI but also outlined ethical questions and common fears associated with it. During the second half of the twentieth century and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games, has frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia, notes,[136] in higher education science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.

History

Historically speaking, the investigation of the moral and ethical implications of "thinking machines" goes back at least to the Enlightenment. Leibniz already posed the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being,[137] and so did Descartes, who described what could be considered an early version of the Turing test.[138]

The romantic period has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought the ethical implications of unhinged technical developments to the forefront of fiction: R.U.R. (Rossum's Universal Robots), Karel Čapek's play about sentient robots endowed with emotions and used as slave labor, is not only credited with the invention of the term "robot" (derived from the Czech word for forced labor, robota) but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.

Impact on technological development

While the anticipation of a future dominated by potentially indomitable technology has fueled the imagination of writers and film makers for a long time, one question has been less frequently analyzed, namely, to what extent fiction has played a role in providing inspiration for technological development. It has been documented, for instance, that the young Alan Turing saw and appreciated Shaw's aforementioned play Back to Methuselah in 1933[139] (just 3 years before the publication of his first seminal paper,[140] which laid the groundwork for the digital computer), and he would likely have been at least aware of plays like R.U.R., which was an international success and translated into many languages.

One might also ask what role science fiction played in establishing the tenets and ethical implications of AI development: Isaac Asimov conceptualized his Three Laws of Robotics in the 1942 short story "Runaround", part of the short story collection I, Robot. Arthur C. Clarke's short story "The Sentinel", on which Stanley Kubrick's film 2001: A Space Odyssey is based, was written in 1948 and published in 1952. Another example (among many others) would be Philip K. Dick's numerous short stories and novels – in particular Do Androids Dream of Electric Sheep?, published in 1968, which features its own version of a Turing test, the Voight-Kampff test, to gauge the emotional responses of androids indistinguishable from humans. The novel later became the basis of the influential 1982 movie Blade Runner by Ridley Scott.

Science fiction has been grappling with ethical implications of AI developments for decades, and thus provided a blueprint for ethical issues that might emerge once something akin to general artificial intelligence has been achieved: Spike Jonze's 2013 film Her shows what can happen if a user falls in love with the seductive voice of his smartphone operating system; Ex Machina, on the other hand, asks a more difficult question: if confronted with a clearly recognizable machine, made only human by a face and an empathetic and sensual voice, would we still be able to establish an emotional connection, still be seduced by it? (The film echoes a theme already present two centuries earlier, in the 1817 short story The Sandmann by E. T. A. Hoffmann.)

The theme of coexistence with artificial sentient beings is also the theme of two recent novels: Machines Like Me by Ian McEwan, published in 2019, involves, among many other things, a love-triangle involving an artificial person as well as a human couple. Klara and the Sun by Nobel Prize winner Kazuo Ishiguro, published in 2021, is the first-person account of Klara, an 'AF' (artificial friend), who is trying, in her own way, to help the girl she is living with, who, after having been 'lifted' (i.e. having been subjected to genetic enhancements), is suffering from a strange illness.

TV series

While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with ethical implications of technology. The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series Osmosis (2020) and the British series The One deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series Love, Death & Robots have imagined scenes of robots and humans living together; the most representative of these, season 2, episode 1, shows how bad the consequences can be when humans rely too much on robots in their lives and the robots get out of control.[141]

Future visions in fiction and games

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is whether it is sentient or non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[142]

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games.[143] It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network. This event caused an ethical schism between those who felt that bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Detroit: Become Human is one of the best-known recent video games to address the ethics of artificial intelligence. Quantic Dream designed the game's chapters using interactive storylines to give players a more immersive gaming experience. Players control three different awakened androids who face different events and make different choices with the aim of changing how humans view androids; different choices result in different endings. This is one of the few games that puts players in the androids' perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created.[144]

Over time, debates have tended to focus less and less on possibility and more on desirability,[145] as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[146]

 

Artificial intelligence has been around for decades. But the scope of the conversation around AI changed dramatically last year, when OpenAI launched ChatGPT, a Large Language Model that, once prompted, can spit out almost-passable prose in a strange semblance of, well, artificial intelligence. 

Its existence has amplified a debate among scientists, executives and regulators around the harms, threats and benefits of the technology. 


Now, governments are racing to pen feasible regulation, with the U.S. so far seeming to look predominantly to prominent tech CEOs for their insight into regulatory practices, rather than scientists and researchers. And companies are racing to increase the capabilities of their AI tech as the boards of nearly every industry look for ways to adopt AI.

With harms and risks of dramatic social inequity, climate impact, increased fraud, misinformation and political instability pushed to the side amidst predictions of super-intelligent AI, the ethical question comes into greater focus.

The answer to it is not surprisingly nuanced. And though there is a path forward, there remains a litany of ethical red flags regarding AI and those responsible for its creation. 

'There's going to be a hell of a lot of abuse of these technologies.'

The ethical issue intrinsic to AI has nothing to do with purported concerns of developing a world-destroying superintelligence. These fears, spouted by Elon Musk and Sam Altman, have no basis in reality, according to Suresh Venkatasubramanian, an AI researcher and professor who in 2021 served as a White House tech advisor. 

"It's a ploy by some. It's an actual belief by others. And it's a cynical tactic by even more," Venkatasubramanian told TheStreet. "It's a great degree of religious fervor sort of masked as rational thinking."

"I believe that we should address the harms that we are seeing in the world right now that are very concrete," he added. "And I do not believe that these arguments about future risks are either credible or should be prioritized over what we're seeing right now. There's no science in X risk."

Rather, the issue with AI is that there is a "significant concentration of power" within the field that could, according to Nell Watson, a leading AI researcher and ethicist, exacerbate the harms the technology is causing. 

 
Sam Altman, CEO of OpenAI, told Congress in May that 'regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.' Getty Images / Win McNamee

"There isn't a synchronicity between the ability for people to make decisions about AI systems, what those systems are doing, how they're interpreting them and what kinds of impressions these systems are making," Watson told TheStreet. 

And though normal civilians don't have any say in whether — or how — these systems get created, the vast majority of people, according to recent polling by the Institute for AI Policy, want AI development to slow down. More than 80% of those surveyed don't trust tech companies to self-regulate when it comes to AI; 82% want to slow down the development of the technology and 71% think the risks outweigh the potential rewards. 

With the power to create and deploy AI models concentrated in just a few tech giants — companies incentivized to earn revenue in order to maximize shareholder value — Watson is not optimistic that the firms deploying AI will do so responsibly.

"Businesses can save a lot of money if they get rid of middle managers and line managers and things like that," Watson said. "The prognosis is not good. There's going to be a hell of a lot of abuse of these technologies. Not always deliberately, but simply out of complacency or out of ignorance. 

"A lot of these systems are going to end up having a terrible impact on people."

This impact is not some distant threat; it has been ongoing for years. Britain's Horizon Post Office scandal involved "dozens of people being wrongfully sent to jail by an algorithmic management system that said that they were stealing when they were not," Watson said. 

Dozens of these convictions were later overturned.

"There are real, actual harms to people from systems that are discriminatory, unsafe, ineffective, not transparent, unaccountable. That's real," Venkatasubramanian said. "We've had 10 years or more of people actually being harmed. We're not concerned about hypotheticals."


Responsible AI in Big Tech

This concentration of control, according to Brian Green, an ethicist with the Institute for Technology, Ethics, & Culture, is potentially dangerous considering the ethical questions at hand: rampant misinformation, data scraping and training AI models on content without notifying, crediting or compensating the original creator. 

"There are lots of things to be worried about because there are just so many things that can go wrong," Green told TheStreet. "The more power that people have, the more they can use that power for bad purposes, and they might not be intending to use it for that; it might just happen as a side effect."

Though he recognized that there is a long way to go, Green, who co-authored a handbook on ethics in emerging technology, is optimistic that if companies start handling small ethical tasks, it will prepare everyone to handle larger issues (such as economic disruption) when those issues come to hand. 

If the firms behind AI start thinking intentionally about ethics, striving to make "AI that's more fair, that's more inclusive, that's safer, that's more secure, that's more private, then that should get them prepared to take on any big issues in the future," Green said. "If you're doing these small things well, you should be able to do the big things well, also."

This effort, according to Watson, needs to go beyond mere ethical intentions; it ought to involve the combination of ethics with AI safety work to prevent some of "the worst excesses" of these models. 

 
'We are on the event horizon of the black hole that is artificial superintelligence,' Musk said in May. The Washington Post/Getty Images

"The people who are impacted should have a say in how it gets implemented and developed," Venkatasubramanian said. "It absolutely can be done. But we need to make it happen. It's not going to happen by accident."

The regulatory approach

Watson's greatest hope is that alignment comes easily and regulation comes quickly, given the importance of clear, actionable regulation in guaranteeing that the companies developing these technologies engage with them responsibly. Her greatest fear is that the congressional approach to AI might mimic the congressional approach to carbon emissions and the environment.

"There was a point where everybody, liberal, conservative, could agree this was a good thing," Watson said. "And then it became politicized and it died. The same thing could very easily happen with AI ethics and safety."


Green, though optimistic, was likewise of the opinion that people, from those artists impacted by generative AI, to the companies developing it, to the lawmakers in Washington, must actively work to ensure this technology is equitable. 

"You really need either some kind of strong social movement towards doing it or you need government regulation," Green said. "If every consumer said 'I'm not going to use a product from this company until they get their act together, ethically,' then it would work."

A growing concern around regulation, however, specifically that which might limit the kind or quantity of data that AI companies could scrape, is that it would further cement Big Tech's lead over any smaller startups. 

Amazon (AMZN), Google (GOOGL) and Apple (AAPL) "have all the data. They don't have to share it with anybody. How do we ever catch up?" Diana Lee, co-founder and CEO of Constellation, an automated marketing firm, told TheStreet. "When it comes to information that's on the web that's publicly traded information, we feel like that's already ethical because it's already out there."

Others, such as Microsoft (MSFT), have often discussed the importance of striking a "better balance between regulation and innovation." 

But these recurring fears of hindering innovation, Venkatasubramanian said, have yet to be legitimately expounded upon, and to him, hold little water. The same executives who have highlighted fears of a regulatory impact on innovation have done little to explain how regulation could hurt innovation. 

"All I can hear is 'we want to conduct business as usual,'" he said. "It's not a balance."

The important thing now, Venkatasubramanian said, is for regulators to avoid the "trap of thinking there's only one thing to do. There are multiple things to do."

Chief among them is clear, enforceable regulation. Venkatasubramanian co-authored the White House's Blueprint for an AI Bill of Rights, which he said could easily be adopted into regulation. The Bill of Rights lays out a series of principles — safe and effective systems, discrimination protections, data privacy, notice and explanation and human alternatives — designed to protect people from AI harm. 

 
Senate Majority Leader Chuck Schumer (D-N.Y.) hosted prominent tech executives at the first AI forum Sept. 13. The Washington Post/Getty Images

"It is really important that Congress pays attention not just to AI as generative AI but AI broadly," he said. "Everyone's thinking about ChatGPT; it'd be really terrible if all the legislation that gets proposed only focuses on generative AI. 

"All the harms that we're talking about will exist even without generative AI."


Chuck Schumer's AI Forums

In an effort to better inform Congress about a constantly evolving technological landscape, Senate Majority Leader Chuck Schumer (D-N.Y.) hosted the first of a series of nine AI forums Sept. 13. Musk, Altman, Bill Gates and executives from companies ranging from Google (GOOGL) to Nvidia (NVDA) were present at the meeting, a fact that garnered widespread criticism for appearing to focus regulatory attention on those who stand to benefit from the technology, rather than those impacted by or studying it. 

"I think they missed an opportunity because everyone pays attention to the first one. They made a very clear statement," Venkatasubramanian said. "And I think it is important, critically important, to hear from the people who are actually impacted. And I really, really hope that the future forums do that."

The executives behind the companies building and deploying these models, Venkatasubramanian added, don't seem to understand what they're creating. Some, including Musk and Altman, have "very strange ideas about what we should be concerned about. These are the folks Congress is hearing from."

The path toward a positive AI future

While the harms and risks remain incontrovertible, artificial intelligence could lead to massive societal improvements. As Gary Marcus, a leading AI researcher, has said, AI, properly leveraged, can help scientists across all fields solve problems and gain understanding at a faster rate. Medicines can be discovered and produced more quickly. 

The tech can even be used to help better understand and mitigate some impacts of climate change by allowing scientists to better collate data in order to discover predictive trends and patterns. 

Current systems — LLMs like ChatGPT — however, "are not going to reinvent material science and save the climate," Marcus told the New York Times in May. "I feel that we are moving into a regime where the biggest benefit is efficiency. These tools might give us tremendous productivity benefits but also destroy the fabric of society."

Further, Venkatasubramanian said, there is a growing list of incredible innovations happening in the field around building responsible AI, innovating methods of auditing AI systems, building instruments to examine systems for disparities and building explainable models. 

These "responsible" AI innovations are vital to get to a positive future where AI can be appropriately leveraged in a net-beneficial way, Venkatasubramanian said. 

"Short term, we need laws, regulations, we need this now. What that will trigger in the medium term is market creation; we're beginning to see companies form that offer responsible AI as a service, auditing as a service," he said. "The laws and regulations will create a demand for this kind of work."

The longer-term change that Venkatasubramanian thinks must happen, though, is a cultural one. And this shift might take a few years.

"We need people to deprogram themselves from the whole, 'move fast and break things' attitude that we've had so far. People need to change their expectations," he said. "That culture change will take time because you create the laws, the laws create the market demand, that creates the need for jobs and skills which changes the educational process.

"So you see a whole pipeline playing out on different time scales. That's what I want to see. I think it's entirely doable. I think this can happen. We have the code, we have the knowledge. We just have to have the will to do it."




-------------------------------------------------------------------------

Can't fix stupid, but MrFixIt does FIX the PROBLEM!

MrFixIt.Ai

For the advancement of MrFixIt.Ai, a virtual ChatBot, and MrFixIt virtual animated avatars

TJ@MrFixIt.Ai

TJ Hammons
107 1/2 East Main Street
Norman, Oklahoma    73069
405-215-5985