Mr Fix It AMERICA
MrFixIt - MrFixItAmerica
www.MrFixIt.Ai
TJ@MrFixIt.Ai - 405-215-5985
MrFixIt Artificial Intelligence
Science & Tech

artificial intelligence

    
Also known as: AI

artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—such as discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match full human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, voice or handwriting recognition, and chatbots.


 

What is intelligence?

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is usually not taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of Sphex—must include the ability to adapt to new circumstances.


 

Psychologists generally characterize human intelligence not by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

 
 

Learning

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously had been presented with jumped, whereas a program that is able to generalize can learn the “add ed” rule and so form the past tense of jump based on experience with similar verbs.
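
The difference between rote learning and generalization can be made concrete with a short sketch. The code below is purely illustrative and is not taken from any particular AI system; the memorized verb list and the "add ed" rule are assumptions made for the example.

```python
# Rote learning: memorize each (verb, past-tense) pair seen so far.
rote_memory = {"walk": "walked", "talk": "talked", "play": "played"}

def rote_past_tense(verb):
    # Fails on anything it has not literally seen before.
    return rote_memory.get(verb, None)

# Generalization: learn the "add ed" rule from the same examples
# and apply it to new, similar verbs.
def generalized_past_tense(verb):
    return verb + "ed"

print(rote_past_tense("jump"))         # None: never memorized
print(generalized_past_tense("jump"))  # "jumped": the rule carries over to a new verb
```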

Reasoning

To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, “Fred must be in either the museum or the café. He is not in the café; therefore he is in the museum,” and of the latter, “Previous accidents of this sort were caused by instrument failure; therefore this accident was caused by instrument failure.” The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premise lends support to the conclusion without giving absolute assurance. Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behaviour—until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules. There has been considerable success in programming computers to draw inferences. However, true reasoning involves more than just drawing inferences: it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the hardest problems confronting AI.
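
As a minimal illustration, the deductive example about Fred can be carried out mechanically. The set-based encoding below is an assumption chosen for brevity, not a general inference engine.

```python
# Premise 1: Fred is in the museum or the cafe.
possible_locations = {"museum", "cafe"}
# Premise 2: Fred is not in the cafe.
ruled_out = {"cafe"}

# Deduction: eliminate the ruled-out alternative; whatever remains must hold
# if the premises are true.
conclusion = possible_locations - ruled_out
print(conclusion)  # {'museum'}
```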

 

Problem solving

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis—a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means—in the case of a simple robot this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is reached.
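
A toy sketch of means-end analysis follows, using a hypothetical robot that moves along a line. The action names echo the list above, but the world model and the difference measure are assumptions made only for illustration.

```python
# Means-end analysis on a toy problem: a robot at position `state` must reach
# position `goal` on a line. At each step, pick the action that most reduces
# the difference between the current state and the goal.
ACTIONS = {"MOVELEFT": -1, "MOVERIGHT": +1}

def means_end_analysis(state, goal):
    plan = []
    while state != goal:
        # Choose the action that minimizes the remaining difference.
        name, delta = min(ACTIONS.items(),
                          key=lambda item: abs((state + item[1]) - goal))
        state += delta
        plan.append(name)
    return plan

print(means_end_analysis(0, 3))  # ['MOVERIGHT', 'MOVERIGHT', 'MOVERIGHT']
```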

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.

 

Perception

In perception the environment is scanned by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field.

One of the earliest systems to integrate perception and action was FREDDY, a stationary robot with a moving television eye and a pincer hand, constructed at the University of Edinburgh, Scotland, during the period 1966–73 under the direction of Donald Michie. FREDDY was able to recognize a variety of objects and could be instructed to assemble simple artifacts, such as a toy car, from a random heap of components. At present, artificial perception is sufficiently advanced to enable optical sensors to identify individuals and autonomous vehicles to drive at moderate speeds on the open road.

 

Language

Language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a mini-language, it being a matter of convention that ⚠ means “hazard ahead” in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in pressure means the valve is malfunctioning.” An important characteristic of full-fledged human languages—in contrast to birdcalls and traffic signs—is their productivity. A productive language can formulate an unlimited variety of sentences.

Large language models like ChatGPT can respond fluently in a human language to questions and statements. Although such models do not actually understand language as humans do but merely select words that are more probable than others, they have reached the point where their command of a language is indistinguishable from that of a normal human. What, then, is involved in genuine understanding, if even a computer that uses language like a native human speaker is not acknowledged to understand? There is no universally agreed upon answer to this difficult question.

 

 

 

Methods and goals in AI

 

Symbolic vs. connectionist approaches

AI research follows two distinct, and to some extent competing, methods, the symbolic (or “top-down”) approach, and the connectionist (or “bottom-up”) approach. The top-down approach seeks to replicate intelligence by analyzing cognition independent of the biological structure of the brain, in terms of the processing of symbols—whence the symbolic label. The bottom-up approach, on the other hand, involves creating artificial neural networks in imitation of the brain’s structure—whence the connectionist label.

To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by “tuning” the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.) In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions. Simply put, neural activities are the basis of the bottom-up approach, while symbolic descriptions are the basis of the top-down approach.
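
The contrast can be sketched in a few lines of code. In this toy example the 3x3 "letters", the geometric rule, and the stored prototypes are all assumptions made for illustration; a real connectionist system would learn its weights by training rather than compare fixed prototypes.

```python
# Toy 3x3 bitmaps standing in for scanned letters (assumed for illustration).
L_SHAPE = ("X..",
           "X..",
           "XXX")

# Top-down (symbolic): test a geometric description of "L":
# a vertical stroke on the left and a horizontal stroke along the bottom.
def top_down_is_L(bitmap):
    left_column = all(row[0] == "X" for row in bitmap)
    bottom_row = all(ch == "X" for ch in bitmap[-1])
    return left_column and bottom_row

# Bottom-up (connectionist-style): compare raw pixels to stored prototypes;
# in a real neural network these similarities would be learned weights.
PROTOTYPES = {"L": L_SHAPE, "I": (".X.", ".X.", ".X.")}

def bottom_up_classify(bitmap):
    def similarity(a, b):
        return sum(x == y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return max(PROTOTYPES, key=lambda k: similarity(bitmap, PROTOTYPES[k]))

print(top_down_is_L(L_SHAPE))       # True
print(bottom_up_classify(L_SHAPE))  # 'L'
```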

In The Fundamentals of Learning (1932), Edward Thorndike, a psychologist at Columbia University, New York City, first suggested that human learning consists of some unknown property of connections between neurons in the brain. In The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University, Montreal, Canada, suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections. The notion of weighted connections is described in a later section, Connectionism.

 

 

 

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of human beings or animals.

AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), and competing at the highest level in strategic games (such as chess and Go).[1]

Artificial intelligence was founded as an academic discipline in 1956.[2] The field went through multiple cycles of optimism[3][4] followed by disappointment and loss of funding.[5][6] After 2012, deep learning surpassed all previous AI techniques,[7] leading to a vast increase in funding and interest.

The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals.[8] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience, and many other fields.[9]

Goals

The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]

Reasoning, problem-solving

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[10] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.[11]

Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow larger.[12] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[13] Accurate and efficient reasoning is an unsolved problem.

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.

Knowledge representation and knowledge engineering[14] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[15] scene interpretation,[16] clinical decision support,[17] knowledge discovery (mining "interesting" and actionable inferences from large databases),[18] and other areas.[19]

A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a domain of knowledge.[20] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge and act as mediators between domain ontologies that cover specific knowledge about a particular domain (field of interest or area of concern).

Knowledge bases need to represent things such as: objects, properties, categories and relations between objects; [21] situations, events, states and time;[22] causes and effects;[23] knowledge about knowledge (what we know about what other people know);[24] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[25] and many other aspects and domains of knowledge.
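
A minimal sketch of these ideas, assuming a plain Python dictionary encoding rather than any standard knowledge representation language (the facts themselves are invented for the example), might look like this:

```python
# A tiny knowledge base: category membership, a default, and an exception.
is_a = {"Tweety": "bird", "bird": "animal"}
facts = {("bird", "can_fly"): True}           # default: birds can fly
exceptions = {("penguin", "can_fly"): False}  # overrides the default

def ancestors(thing):
    # Follow is_a links upward (Tweety -> bird -> animal).
    chain = []
    while thing in is_a:
        thing = is_a[thing]
        chain.append(thing)
    return chain

def can_fly(thing):
    # Default reasoning: use the most specific answer available.
    for category in [thing] + ancestors(thing):
        if (category, "can_fly") in exceptions:
            return exceptions[(category, "can_fly")]
        if (category, "can_fly") in facts:
            return facts[(category, "can_fly")]
    return None  # unknown

print(ancestors("Tweety"))  # ['bird', 'animal']
print(can_fly("Tweety"))    # True (default inherited from 'bird')
```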

Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous),[26] the difficulty of knowledge acquisition, and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[13]

Planning and decision making

Automated planning[27] and automated decision making[28] are part of AI.

Learning

Machine learning is the study of programs that can improve their performance on a given task automatically.[29] It has been a part of AI from the beginning.[b]

There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance.[32] Supervised learning requires a human to label the input data first, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).[33] In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".[34] Transfer learning is when the knowledge gained from one problem is applied to a new problem.[35] Deep learning uses artificial neural networks for all of these types of learning.
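
As one concrete instance of supervised learning in its regression variety, the sketch below fits a line to labeled numeric examples by ordinary least squares. The data and the single-feature model are assumptions made purely for illustration.

```python
# Supervised regression: learn y = w*x + b (approximately) from labeled examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form least-squares estimates for slope and intercept.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(round(w, 2), round(b, 2))  # learned parameters, close to slope 2, intercept 0
print(round(w * 5.0 + b, 2))     # prediction for an unseen input x = 5
```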

Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[36]

Natural language processing

Natural language processing (NLP)[37] allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.[38]

Early work, based on Noam Chomsky's generative grammar, had difficulty with word-sense disambiguation[c] unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem[26]).

Modern deep learning techniques for NLP include word embedding (how often one word appears near another),[39] transformers (which find patterns in text),[40] and others.[41] In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text,[42][43] and by 2023 these models were able to get human-level scores on the bar exam, the SAT, the GRE, and many other real-world applications.[44]
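
The co-occurrence idea behind word embedding can be sketched roughly as follows. The tiny corpus, the window size of one, and the raw counting scheme are assumptions for the example; real embedding models are trained very differently and at vastly larger scale.

```python
from collections import Counter
from math import sqrt

corpus = "the cat sat on the mat while the dog sat on the rug".split()

# Crude "embedding": counts of neighboring words within a window of one.
def embed(word):
    vec = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in (i - 1, i + 1):
                if 0 <= j < len(corpus):
                    vec[corpus[j]] += 1
    return vec

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# "cat" and "dog" appear in identical contexts here, so their vectors match.
print(round(cosine(embed("cat"), embed("dog")), 2))  # 1.0
print(round(cosine(embed("cat"), embed("mat")), 2))  # 0.5
```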

Perception

Feature detection (pictured: edge detection) helps AI compose informative abstract structures out of raw data.

Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.[45] The field includes speech recognition,[46] image classification,[47] facial recognition, object recognition,[48] and robotic perception.[49]
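
As a tiny illustration of feature detection (compare the edge-detection caption above), the following sketch convolves an image with a horizontal-difference kernel to highlight a vertical edge. The 4x4 "image" and the kernel are assumptions made for the example.

```python
# A tiny grayscale "image": dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# A simple horizontal-difference kernel; large responses mark vertical edges.
kernel = [[-1, 1]]

def convolve(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(ker[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)  # the column of 9s marks the boundary between dark and bright
```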

Robotics

Robotics[50] uses AI.

Social intelligence

Kismet, a robot with rudimentary social skills[51]

Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood.[52] For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction. However, this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[53] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.[54]

General intelligence

A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.[8]

Tools

Search and optimization

AI can solve many problems by intelligently searching through many possible solutions.[55] There are two very different kinds of search used in AI: state space search and local search.

State space search

State space search searches through a tree of possible states to try to find a goal state.[56] For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[57]

Simple exhaustive searches[58] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes.[12] "Heuristics" or "rules of thumb" can help to prioritize choices that are more likely to reach a goal.[59]

Adversarial search is used for game-playing programs, such as chess or go. It searches through a tree of possible moves and counter-moves, looking for a winning position.[60]
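
A compact sketch of the minimax idea behind adversarial search follows, applied to a hypothetical game tree written out explicitly as a dictionary. The tree and its leaf scores are invented for illustration; real programs generate such trees from the rules of chess or Go.

```python
# Leaves carry scores from the maximizing player's point of view;
# internal nodes list the positions reachable in one move.
GAME_TREE = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
LEAF_SCORES = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

def minimax(node, maximizing):
    if node in LEAF_SCORES:
        return LEAF_SCORES[node]
    children = (minimax(child, not maximizing) for child in GAME_TREE[node])
    return max(children) if maximizing else min(children)

# The maximizer moves first: "a" guarantees 3, while "b" risks -2.
best = max(GAME_TREE["start"], key=lambda move: minimax(move, maximizing=False))
print(best, minimax("start", maximizing=True))  # a 3
```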

Local search

A particle swarm seeking the global minimum

Local search uses mathematical optimization to find a numeric solution to a problem. It begins with some form of a guess and then refines the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. This process is called stochastic gradient descent.[61]
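
A minimal sketch of local search by hill climbing on an assumed one-dimensional "landscape" is shown below. The function and step size are chosen only for illustration; stochastic gradient descent as used in machine learning additionally estimates the slope from random samples of training data.

```python
# Maximize a simple one-dimensional "landscape" by blind hill climbing.
def landscape(x):
    return -(x - 3.0) ** 2 + 10.0   # a single peak at x = 3

def hill_climb(x, step=0.1, iterations=200):
    for _ in range(iterations):
        # Look one step left and one step right; keep whichever point is highest.
        candidates = [x - step, x, x + step]
        x = max(candidates, key=landscape)
    return x

print(round(hill_climb(0.0), 2))  # converges near the peak at 3.0
```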

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses).[62]

Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[63]

Neural networks and statistical classifiers (discussed below) also use a form of local search, where the "landscape" to be searched is formed by learning.

Logic

Formal logic is used for reasoning and knowledge representation.[64] Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies")[65] and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys").[66]

Logical inference (or deduction) is the process of proving a new statement (conclusion) from other statements that are already known to be true (the premises).[67] A logical knowledge base also handles queries and assertions as a special case of inference.[68] An inference rule describes what is a valid step in a proof. The most general inference rule is resolution.[69] Inference can be reduced to performing a search to find a path that leads from premises to conclusions, where each step is the application of an inference rule.[70] Inference performed this way is intractable except for short proofs in restricted domains. No efficient, powerful and general method has been discovered.[71]

Fuzzy logic assigns a "degree of truth" between 0 and 1 and handles uncertainty and probabilistic situations.[72] Non-monotonic logics are designed to handle default reasoning.[25] Other specialized versions of logic have been developed to describe many complex domains (see knowledge representation above).
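
A small sketch of the degree-of-truth idea, using the common min/max rules for fuzzy AND and OR (the membership curve for "tall" is an assumption invented for the example):

```python
# Fuzzy membership: how "tall" is a person of a given height (in cm)?
def tall(height_cm):
    # 0.0 below 160 cm, 1.0 above 190 cm, linear in between (assumed curve).
    return min(1.0, max(0.0, (height_cm - 160) / 30))

def fuzzy_and(a, b):
    return min(a, b)   # a common choice of t-norm

def fuzzy_or(a, b):
    return max(a, b)

heavy = 0.4   # assumed degree of truth of "the person is heavy"
print(round(tall(175), 2))                    # 0.5: partially tall
print(round(fuzzy_and(tall(175), heavy), 2))  # 0.4: "tall AND heavy"
print(round(fuzzy_or(tall(175), heavy), 2))   # 0.5: "tall OR heavy"
```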

Probabilistic methods for uncertain reasoning

Expectation-maximization clustering of Old Faithful eruption data starts from a random guess but then successfully converges on an accurate clustering of the two physically distinct modes of eruption.

Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.[73]

Bayesian networks[74] are a very general tool that can be used for many problems, including reasoning (using the Bayesian inference algorithm),[d][76] learning (using the expectation-maximization algorithm),[e][78] planning (using decision networks)[79] and perception (using dynamic Bayesian networks).[80]
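
A hand-computed instance of the Bayesian updating that such networks automate is shown below; all of the probabilities are assumed for illustration.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_h = 0.01              # prior: probability a part is faulty (assumed)
p_e_given_h = 0.9       # the sensor alarms if the part is faulty (assumed)
p_e_given_not_h = 0.05  # false-alarm rate (assumed)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of an alarm
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # ~0.154: the alarm makes a fault more likely, but far from certain
```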

Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[80]

Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[81] and information value theory.[82] These tools include models such as Markov decision processes,[83] dynamic decision networks,[80] game theory and mechanism design.[84]

Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers (e.g. "if shiny then diamond"), on one hand, and controllers (e.g. "if diamond then pick up"), on the other hand. Classifiers[85] are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[33]

There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm.[86] The k-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.[87] The naive Bayes classifier is reportedly the "most widely used learner"[88] at Google, due in part to its scalability.[89] Neural networks are also used as classifiers.[90]
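
A short sketch of the k-nearest neighbor classifier mentioned above, on a made-up two-feature data set (the observations, labels, and the choice of k = 3 are assumptions for illustration):

```python
from math import dist  # Euclidean distance, Python 3.8+

# Labeled data set: (feature vector, class label).
data = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((0.9, 1.1), "cat"),
        ((5.0, 5.0), "dog"), ((5.2, 4.8), "dog"), ((4.9, 5.1), "dog")]

def knn_classify(x, k=3):
    # Find the k stored observations closest to the new observation x.
    neighbors = sorted(data, key=lambda item: dist(x, item[0]))[:k]
    labels = [label for _, label in neighbors]
    # Majority vote among the neighbors decides the class.
    return max(set(labels), key=labels.count)

print(knn_classify((1.1, 0.9)))  # 'cat'
print(knn_classify((4.8, 5.2)))  # 'dog'
```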

Artificial neural networks

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

Artificial neural networks[90] were inspired by the design of the human brain: a simple "neuron" N accepts input from other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. In practice, the "neurons" are a list of numbers, the weights are matrices, and learning is performed by linear algebra operations on the matrices and vectors. Neural networks perform a type of mathematical optimization: they perform stochastic gradient descent on a multi-dimensional topology that is created by training the network.[f]

Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.[92] The most common training technique is the backpropagation algorithm.[93] The earliest learning technique for neural networks was Hebbian learning ("fire together, wire together").[94]

In feedforward neural networks the signal passes in only one direction.[95] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events.[96] Perceptrons[97] use only a single layer of neurons; deep learning[98] uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other; this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.[99]
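
A minimal sketch of a single-layer perceptron, trained with the classic perceptron learning rule to compute logical AND, is given below. The data, learning rate, and number of passes are assumptions; deep networks stack many such layers and are trained with backpropagation instead.

```python
# Train a single neuron (perceptron) to compute logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

for _ in range(20):                      # a few passes over the data suffice here
    for x, target in examples:
        error = target - predict(x)      # perceptron rule: adjust weights by the error
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```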

Deep learning

Representing images on multiple layers of abstraction in deep learning[100]

Deep learning[98] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.[101]

Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, image classification[102] and others.

Specialized hardware and software

In the late 2010s, graphics processing units (GPUs), increasingly designed with AI-specific enhancements and used with specialized software such as TensorFlow, replaced previously dominant central processing units (CPUs) as the main means of training large-scale (commercial and academic) machine learning models.[103]

Historically, specialized languages, such as Lisp, Prolog, and others, had been used.

Applications

For this 2018 project of the artist Joseph Ayerle the AI had to learn the typical patterns in the colors and brushstrokes of Renaissance painter Raphael. The portrait shows the face of the actress Ornella Muti, "painted" by AI in the style of Raphael.

AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements,[104] recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic,[105][106] targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa),[107] autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace) and image labeling (used by Facebook, Apple's iPhoto and TikTok).

There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported they had incorporated "AI" in some offerings or processes.[108] A few examples are energy storage,[109] medical diagnosis, military logistics, applications that predict the result of judicial decisions,[110] foreign policy,[111] or supply chain management.

Game-playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[112] In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[113] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[114] Other programs handle imperfect-information games, such as the poker programs Pluribus[g] and Cepheus, which play at a superhuman level.[116] DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own.[117]

In the early 2020s, generative AI gained widespread prominence. ChatGPT, based on GPT-3, and other large language models were tried by 14% of American adults.[118] The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion[119][120] sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat,[121] the fictional arrest of Donald Trump,[122] and a hoax of an attack on the Pentagon,[123] as well as the usage in professional creative arts.[124][125]

AlphaFold 2 (2020) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.[126]

Ethics

Risks and harm

Algorithmic bias

AI programs can become biased after learning from real-world data.[127] The bias may not be introduced by the system designers but learned by the program, and thus the programmers may not be aware that the bias exists.[128] Bias can be inadvertently introduced by the way training data is selected and by the way a model is deployed.[129][127] It can also emerge from correlations: AI is used to classify individuals into groups and then make predictions assuming that the individual will resemble other members of the group. In some cases, this assumption may be unfair.[130] An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned recidivism risk level of black defendants is far more likely to be overestimated than that of white defendants, despite the fact that the program was not told the races of the defendants.[131]

Health equity issues may also be exacerbated when many-to-many mapping is done without taking steps to ensure equity for populations at risk for bias. At this time, equity-focused tools and regulations are not in place to ensure equitable representation and usage across such applications.[132] Other examples where algorithmic bias can lead to unfair outcomes are when AI is used for credit rating, CV screening, hiring and applications for public housing.[127]

At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems are demonstrated to be free of bias mistakes, they should be considered unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[133]

Lack of transparency

Modern machine learning applications cannot explain how they have reached a decision.

Bad actors and weaponized AI

AI provides a number of tools that are particularly useful for authoritarian governments: smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machine learning to classify potential enemies of the state and can prevent them from hiding; recommendation systems can precisely target propaganda and misinformation for maximum effect; deepfakes aid in producing misinformation; advanced AI can make centralized decision making more competitive with liberal and decentralized systems such as markets.[134]

Terrorists, criminals and rogue states may use other forms of weaponized AI such as advanced digital warfare and lethal autonomous weapons. By 2015, over fifty countries were reported to be researching battlefield robots.[135]

Machine-learning AI is also able to design tens of thousands of toxic molecules in a matter of hours.[136]

Technological unemployment

Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[137]

In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[138] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.[139] Risk estimates vary; for example, in the 2010s Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".[h][141] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology (rather than social policy) creates unemployment (as opposed to redundancies).[137]

Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[142] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[143]

Copyright

In order to leverage as large a dataset as is feasible, generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under a rationale of "fair use". Experts disagree about how well, and under what circumstances, this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[144]

Ethical machines and alignment

Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[145]

Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.[146] The field of machine ethics is also called computational morality,[146] and was founded at an AAAI symposium in 2005.[147]

Other approaches include Wendell Wallach's "artificial moral agents"[148] and Stuart J. Russell's three principles for developing provably beneficial machines.[149]

Regulation

OpenAI CEO Sam Altman testifies about AI regulation before a US Senate subcommittee, 2023

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.[150] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[151] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[152][153] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[154] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the US and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[154] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[154] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[155] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[156]

In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[152] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[157] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[158][159]

History

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate both mathematical deduction and formal reasoning, which is known as the Church–Turing thesis.[160] This, along with concurrent discoveries in cybernetics and information theory, led researchers to consider the possibility of building an "electronic brain".[i][162] The first paper later recognized as "AI" was McCulloch and Pitts' 1943 design for Turing-complete "artificial neurons".[163]

The field of AI research was founded at a workshop at Dartmouth College in 1956.[j][2] The attendees became the leaders of AI research in the 1960s.[k] They and their students produced programs that the press described as "astonishing":[l] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[m][3]

By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[167] and laboratories had been established around the world.[168] Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do".[169] Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[170]

They had, however, underestimated the difficulty of the problem.[n] Both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[172] and ongoing pressure from the US Congress to fund more productive projects. Minsky and Papert's book Perceptrons was understood as proving that the artificial neural network approach would never be useful for solving real-world tasks, thus discrediting the approach altogether.[173] The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[5]

In the early 1980s, AI research was revived by the commercial success of expert systems,[174] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[4] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[6]

Many researchers began to doubt that the current practices would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.[175] A number of researchers began to look into "sub-symbolic" approaches.[176] Robotics researchers, such as Rodney Brooks, rejected "representation" in general and focussed directly on engineering machines that move and survive.[o] Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[73][181] But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.[182] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[183]

AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).[184] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".[185]

Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[7] For many specific tasks, other methods were abandoned.[p] Deep learning's success was based on both hardware improvements (faster computers,[187] graphics processing unitscloud computing[188]) and access to large amounts of data[189] (including curated datasets,[188] such as ImageNet).

The machine learning achievements made it safe for media and businesses to refer to them as "AI" again. [q] The number of software projects that use machine learning at Google increased from a "sporadic usage" in 2012 to more than 2,700 projects in 2015.[188]

In a 2017 survey, one in five companies reported they had incorporated "AI" in some offerings or processes.[190] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[154] According to 'AI Impacts', about $50 billion annually was invested in "AI" around 2022 in the US alone and about 20% of new US Computer Science PhD graduates have specialized in "AI";[191] about 800,000 "AI"-related US job openings existed in 2022.[192]

In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.[193]

Philosophy

Defining artificial intelligence

Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?"[194] He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour".[194] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[195] Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people[r] but that "it is usual to have a polite convention that everyone thinks".[196]

Russell and Norvig agree with Turing that AI must be defined in terms of "acting" and not "thinking".[197] However, they are critical that the test compares machines to people. "Aeronautical engineering texts," they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"[198] AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[199]

McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world."[200] Another AI founder, Marvin Minsky, similarly defines it as "the ability to solve hard problems".[201] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine—and no further philosophical discussion is required (and may not even be possible).

This definition has also been adopted by Google,[202][better source needed] a major practitioner in the field of AI. It stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way intelligence is defined in biological systems.

Evaluating approaches to AI

No established unifying theory or paradigm has guided AI research for most of its history.[s] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, neat, soft and narrow (see below). Critics argue that these questions may have to be revisited by future generations of AI researchers.

Symbolic AI and its limits

Symbolic AI (or "GOFAI")[204] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[205]

However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult.[206] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[207] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree.[t][13]

The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence,[209][210] in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.

Neat vs. scruffy

"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor; scruffies rely only on incremental testing to see if they work. This issue was actively discussed in the 70s and 80s,[211] but eventually was seen as irrelevant. In the 1990s mathematical methods and solid scientific standards became the norm, a transition that Russell and Norvig described in 2003 as "the victory of the neats".[212] However, in 2020 they wrote that "deep learning may represent a resurgence of the scruffies".[175] Modern AI has elements of both.

Soft vs. hard computing

Finding a provably correct or optimal solution is intractable for many important problems.[12] Soft computing is a set of techniques, including genetic algorithmsfuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 80s and most successful AI programs in the 21st century are examples of soft computing with neural networks.

Narrow vs. general AI

AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence (general AI) directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[213][214] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively.

Machine consciousness, sentience and mind

It is not known in the philosophy of mind whether a machine can have a mind, consciousness and mental states in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[215] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.

Consciousness

David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[216] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). Human information processing is easy to explain, however, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.[217]

Computationalism and functionalism

Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.[218]

Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[u] Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind.[222]

Robot rights

If a machine has a mind and subjective experience, then it may also have sentience (the ability to feel), and if so it could also suffer; it has been argued that this could entitle it to certain rights.[223] Any hypothetical robot rights would lie on a spectrum with animal rights and human rights.[224] This issue has been considered in fiction for centuries,[225] and is now being considered by, for example, California's Institute for the Future; however, critics argue that the discussion is premature.[226]

Future

Superintelligence and the singularity

A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[214]

If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".[227] However, most technologies (such as transportation) do not improve exponentially indefinitely, but rather follow an S-curve, slowing when they reach the physical limits of what the technology can do.[228]

Existential risk

It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as the physicist Stephen Hawking puts it, "spell the end of the human race".[229] According to the philosopher Nick Bostrom, for almost any goals that a sufficiently intelligent AI may have, it is instrumentally incentivized to protect itself from being shut down and to acquire more resources, as intermediary steps to better achieve these goals. Sentience or emotions are then not required for an advanced AI to be dangerous. In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".[230] The political scientist Charles T. Rubin argued that "any sufficiently advanced benevolence may be indistinguishable from malevolence" and warned that we should not be confident that intelligent machines will by default treat us favorably.[231]

The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[232] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk have expressed concern about existential risk from AI.[233] In 2023, AI pioneers including Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Sam Altman issued the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"; some others such as Yann LeCun consider this to be unfounded.[234] Mark Zuckerberg said that AI will "unlock a huge amount of positive things", including curing diseases and improving the safety of self-driving cars.[235] Some experts have argued that the risks are too distant in the future to warrant research, or that humans will be valuable from the perspective of a superintelligent machine.[236] Rodney Brooks, in particular, said in 2014 that "malevolent" AI is still centuries away.[v]

Transhumanism

Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.[238]

Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.[239]

In fiction

The word "robot" itself was coined by Karel Čapek in his 1921 play R.U.R., the title standing for "Rossum's Universal Robots".

Thought-capable artificial beings have appeared as storytelling devices since antiquity,[240] and have been a persistent theme in science fiction.[241]

A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[242]

Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the "Multivac" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics;[243] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[244]

Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[245]



-------------------------------------------------------------------------

Can't fix stupid, but MrFixIt does FIX the PROBLEM!

MrFixIt.Ai

For the advancement of MrFixIt.Ai, a virtual ChatBot, and MrFixIt Virtual Animated Avatars

TJ@MrFixIt.Ai

TJ Hammons
107 1/2 East Main Street
Norman, Oklahoma    73069
405-215-5985