Artificial Intelligence

"The real problem is not the existential threat of AI. Instead, it is in the development of ethical AI systems." ― Rana El Kaliouby

Artificial intelligence (AI) – intelligence exhibited by machines or software. It is also the name of the scientific field which studies how to create computers and computer software that are capable of intelligent behaviour.


Contents



What is Artificial Intelligence?


Types of Artificial Intelligence

  • Weak AI (narrow AI) – non-sentient machine intelligence, typically focused on a single narrow task.
  • Strong AI / artificial general intelligence (AGI) – (hypothetical) machine with the ability to apply intelligence to any problem, rather than just one specific problem, typically meaning "at least as smart as a typical human". Its future potential creation is referred to as a technological singularity, and constitutes a global catastrophic risk.
  • Superintelligence – (hypothetical) artificial intelligence far surpassing that of the brightest and most gifted human minds. Due to recursive self-improvement, superintelligence is expected to be a rapid outcome of creating artificial general intelligence.

Approaches to AI


Timeline of Artificial Intelligence


Antiquity Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora).
Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it." Mosaic law prohibits the use of automatons in religion.
10th century BC Yan Shi presented King Mu of Zhou with mechanical men.
384 BC–322 BC Aristotle described the syllogism, a method of formal, mechanical thought and theory of knowledge in The Organon.
1st century Heron of Alexandria created mechanical men and other automatons.
260 Porphyry of Tyros wrote Isagogê which categorized knowledge and logic.
~800 Geber developed the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.
1206 Al-Jazari created a programmable orchestra of mechanical human beings.
1275 Ramon Llull, Spanish theologian, invented the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. The method was developed further by Gottfried Leibniz in the 17th century.
~1500 Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.
~1580 Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.
Early 17th century René Descartes proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").
1620 Sir Francis Bacon developed empirical theory of knowledge and introduced inductive logic in his work The New Organon, a play on Aristotle's title The Organon.
1623 Wilhelm Schickard drew a calculating clock in a letter to Kepler. This was the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland, and René Grillet).
1641 Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".
1642 Blaise Pascal invented the mechanical calculator, the first digital calculating machine.
1672 Gottfried Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division. He also invented the binary numeral system and envisioned a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.
1726 Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations " by using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study." The machine is a parody of Ars Magna, one of the inspirations of Gottfried Leibniz' mechanism.
1750 Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.
1769 Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk. The Turk was later shown to be a hoax, involving a hidden human chess player.
1818 Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.
1822–1859 Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.
1837 The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.
1854 George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.
1863 Samuel Butler suggested that Darwinian evolution also applies to machines, and speculated that they will one day become conscious and eventually supplant humanity.
1913 Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic.
1915 Leonardo Torres y Quevedo built a chess automaton, El Ajedrecista, and published speculation about thinking and automata.
1923 Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This is the first use of the word "robot" in English.
1920s and 1930s Ludwig Wittgenstein and Rudolf Carnap led philosophy into logical analysis of knowledge. Alonzo Church developed the lambda calculus to investigate computability using recursive functional notation.
1931 Kurt Gödel showed that sufficiently powerful formal systems, if consistent, permit the formulation of true theorems that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is the reason why he is sometimes called the "father of theoretical computer science".
1940 Edward Condon displays Nimatron, a digital computer that played Nim perfectly.
1941 Konrad Zuse built the first working program-controlled computers.
1943 Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity", laying foundations for artificial neural networks.
Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow coined the term "cybernetics". Wiener's popular book by that name was published in 1948.
1944 Game theory, which would prove invaluable in the progress of AI, was introduced in Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
1945 Vannevar Bush published As We May Think (The Atlantic Monthly, July 1945), a prescient vision of the future in which computers assist humans in many activities.
1948 John von Neumann (quoted by E.T. Jaynes) in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church-Turing thesis which states that any effective procedure can be simulated by a (generalized) computer.

 

1950 Alan Turing proposes the Turing Test as a measure of machine intelligence.
Claude Shannon published a detailed analysis of chess playing as search.
Isaac Asimov published his Three Laws of Robotics.
1951 The first working AI programs were written in 1951 to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1952–1962 Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.
1956 The Dartmouth College summer AI conference is organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM, and Claude Shannon. McCarthy coins the term artificial intelligence for the conference.
The first demonstration of the Logic Theorist (LT), written by Allen Newell, J.C. Shaw, and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim.
1958 John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language.
Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases.
Teddington Conference on the Mechanization of Thought Processes was held in the UK and among the papers presented were John McCarthy's Programs with Common Sense, Oliver Selfridge's Pandemonium, and Marvin Minsky's Some Methods of Heuristic Programming and Artificial Intelligence.
1959 The General Problem Solver (GPS) was created by Newell, Shaw and Simon while at CMU.
John McCarthy and Marvin Minsky founded the MIT AI Lab.
Late 1950s, early 1960s Margaret Masterman and colleagues at University of Cambridge design semantic nets for machine translation.

 

1960s Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960 J.C.R. Licklider published Man-Computer Symbiosis.
1961 James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.
In Minds, Machines and Gödel, John Lucas denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
Unimation's industrial robot Unimate worked on a General Motors automobile assembly line.
1963 Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of simple perceptrons of Rosenblatt.
1964 Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly.
Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
1965 Lotfi Zadeh at U.C. Berkeley publishes his first paper introducing fuzzy logic "Fuzzy Sets" (Information and Control 8: 338–353).
J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.
Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system.
1966 Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets.
Machine Intelligence workshop at Edinburgh – the first of an influential annual series organized by Donald Michie and others.
A negative report on machine translation (the ALPAC report) kills much work in natural language processing (NLP) for many years.
The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) is demonstrated interpreting the mass spectra of organic chemical compounds, the first successful knowledge-based program for scientific reasoning.
1968 Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics.
Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.
Wallace and Boulton's program, Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering) uses the Bayesian Minimum Message Length criterion, a mathematical realisation of Occam's razor.
1969 Stanford Research Institute (SRI): Shakey the Robot demonstrated combining animal locomotion, perception, and problem solving.
Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program and the basis of many PhD dissertations since, such as those of Bran Boguraev and David Carter at Cambridge.
First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford.
Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of feed-forward, two-layered networks. The book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless, significant progress in the field continued (see below).
McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".

 

Early 1970s Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.
1970 Seppo Linnainmaa publishes the reverse mode of automatic differentiation. This method later became known as backpropagation, and is heavily used to train artificial neural networks.
Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer assisted instruction based on semantic nets as the representation of knowledge.
Bill Woods described Augmented Transition Networks (ATN's) as a representation for natural language understanding.
Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
1971 Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
Work on the Boyer-Moore theorem prover started in Edinburgh.
1972 Prolog programming language developed by Alain Colmerauer.
Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
1973 The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. (Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.)
The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.
1974 Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1975 Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
Austin Tate developed the Nonlin hierarchical planning system able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.
Mid-1970s Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in Natural language processing.
David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.
1976 Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures).
Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
1978 Tom Mitchell, at Stanford, invented the concept of Version spaces for describing the search space of a concept formation program.
Herbert A. Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI, known as "satisficing".
The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments.
1979 Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells".
Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge.
Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming.
The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion (in part via luck).
Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance.
Late 1970s Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.

 

1980s Lisp machines developed and marketed. First expert system shells and commercial applications.
1980 First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford.
1981 Danny Hillis designed the Connection Machine, which uses parallel computing to bring new power to AI and to computation in general. (He later founded Thinking Machines Corporation.)
1982 Japan's Ministry of International Trade and Industry begins the Fifth Generation Computer Systems project (FGCS), an initiative to create a "fifth generation computer" (see history of computing hardware) that was to perform much of its computation using massive parallelism.
1983 John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar (program).
James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events.
Mid-1980s Neural Networks become widely used with the Backpropagation algorithm, also known as the reverse mode of automatic differentiation published by Seppo Linnainmaa in 1970 and applied to neural networks by Paul Werbos.
1985 The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
1986 The team of Ernst Dickmanns at Bundeswehr University of Munich builds the first robot cars, driving up to 55 mph on empty streets.
Barbara Grosz and Candace Sidner create the first computational model of discourse, establishing the field of research.
1987 Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (c.f. Doyle 1983).
Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; Nouvelle AI.
Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc., Toronto, the first commercial strategic and managerial advisory system. The system was based on a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by firm founders Alistair Davidson and Mary Chung; the underlying engine was developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.
1989 The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.
Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network).

 

1990s Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
Early 1990s TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.
1991 The DART scheduling application, deployed in the first Gulf War, repaid DARPA's 30 years of investment in AI research.
1992 Carol Stoker and NASA Ames robotics team explore marine life in Antarctica with an undersea robot Telepresence ROV operated from the ice near McMurdo Bay, Antarctica and remotely via satellite link from Moffett Field, California.
1993 Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second).
Rodney Brooks, Lynn Andrea Stein, and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years.
ISX corporation wins "DARPA contractor of the year" for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.
1994 Lotfi Zadeh at U.C. Berkeley creates "soft computing" and builds a world network of research with a fusion of neural science and neural net systems, fuzzy set theory and fuzzy systems, evolutionary algorithms, genetic programming, and chaos theory and chaotic systems ("Fuzzy Logic, Neural Networks, and Soft Computing," Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77-84).
With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
English draughts (checkers) world champion Tinsley resigned a match against computer program Chinook. Chinook defeated 2nd highest rated player, Lafferty. Chinook won the USA National Tournament by the widest margin ever.
Cindy Mason at NASA organizes the First AAAI Workshop on AI and the Environment.
1995 Cindy Mason at NASA organizes the First International IJCAI Workshop on AI and the Environment.
"No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). Throttle and brakes were controlled by a human driver.
One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (a safety driver took over only in a few critical situations). Active vision was used to deal with rapidly changing street scenes.
1997 The Deep Blue chess machine (IBM) defeats the (then) world chess champion, Garry Kasparov.
First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators.
Computer Othello program Logistello defeated the world champion Takeshi Murakami with a score of 6–0.
1998 Tiger Electronics' Furby is released, becoming the first successful attempt to bring a type of AI into a domestic environment.
Tim Berners-Lee published his Semantic Web Road map paper.
Ulises Cortés and Miquel Sànchez-Marrè organize the first Environment and AI Workshop in Europe ECAI, "Binding Environmental Sciences and Artificial Intelligence."
Leslie P. Kaelbling, Michael Littman, and Anthony Cassandra introduce POMDPs and a scalable method for solving them to the AI community, jumpstarting widespread use in robotics and automated planning and scheduling.
1999 Sony introduces the AIBO, an improved domestic robot in the spirit of the Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous.
Late 1990s Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web.
Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab.
Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.

 

2000 Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th century novelty toy makers.
Cynthia Breazeal at MIT publishes her dissertation on Sociable machines, describing Kismet (robot), with a face that expresses emotions.
The Nomad robot explores remote regions of Antarctica looking for meteorite samples.
2002 iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles.
2004 OWL Web Ontology Language W3C Recommendation (10 February 2004).
DARPA introduces the DARPA Grand Challenge requiring competitors to produce autonomous vehicles for prize money.
NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars.
2005 Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings.
Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions.
Blue Brain is born, a project to simulate the brain at molecular detail.
2006 The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) is held (14–16 July 2006).
2007 Philosophical Transactions of the Royal Society, B – Biology, one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection.
Checkers is solved by a team of researchers at the University of Alberta.
DARPA launches the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment.
2008 Cynthia Mason at Stanford presents her idea on Artificial Compassionate Intelligence, in her paper on "Giving Robots Compassion".
2009 Google builds an autonomous car.

 

2010 Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the Computer Vision group at Microsoft Research, Cambridge.
2011 Mary Lou Maher and Doug Fisher organize the First AAAI Workshop on AI and Sustainability.
IBM's Watson computer defeated television game show Jeopardy! champions Brad Rutter and Ken Jennings.
2011–2014 Apple's Siri (2011), Google's Google Now (2012) and Microsoft's Cortana (2014) are smartphone apps that use natural language to answer questions, make recommendations and perform actions.
2013 Robot HRP-2, built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points across the 8 tasks needed in disaster response: driving a vehicle, walking over debris, climbing a ladder, removing debris, walking through doors, cutting through a wall, closing valves, and connecting a hose.
NEIL, the Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images.
2015 An open letter calling for a ban on the development and use of autonomous weapons is signed by Stephen Hawking, Elon Musk, Steve Wozniak, and 3,000 researchers in AI and robotics.
Google DeepMind's AlphaGo (version: Fan) defeated the three-time European Go champion, 2-dan professional Fan Hui, by 5 games to 0.
2016 Google DeepMind's AlphaGo (version: Lee) defeated Lee Sedol 4–1. Lee Sedol is a 9 dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016. Before the match with AlphaGo, Lee Sedol was confident in predicting an easy 5–0 or 4–1 victory.
2017 Asilomar Conference on Beneficial AI was held, to discuss AI ethics and how to bring about beneficial AI while avoiding the existential risk from artificial general intelligence.
DeepStack is the first published algorithm to beat human players in imperfect-information games, as shown with statistical significance on heads-up no-limit poker. Soon after, the poker AI Libratus, developed by a different research group, individually defeated each of its 4 human opponents (among the best players in the world) at an exceptionally high aggregate winrate, over a statistically significant sample. In contrast to chess and Go, poker is an imperfect-information game.
Google DeepMind's AlphaGo (version: Master) won 60 straight games on two public Go websites, including 3 wins against world Go champion Ke Jie.
A propositional-logic Boolean satisfiability (SAT) solver proves a long-standing mathematical conjecture on Pythagorean triples over the set of integers. The initial proof, 200 TB long, was checked by two independent certified automatic proof checkers.
An OpenAI machine-learned bot played at The International 2017 Dota 2 tournament in August 2017. It won a 1v1 demonstration game against professional Dota 2 player Dendi.
Google DeepMind revealed that AlphaGo Zero, an improved version of AlphaGo, displayed significant performance gains while using far fewer tensor processing units than AlphaGo Lee (it used the same number of TPUs as AlphaGo Master). Unlike previous versions, which learned the game by observing millions of human moves, AlphaGo Zero learned by playing only against itself. The system then defeated AlphaGo Lee 100 games to zero, and defeated AlphaGo Master 89 to 11. Although unsupervised learning is a step forward, much has yet to be learned about general intelligence. AlphaZero mastered chess in 4 hours, defeating the best chess engine, Stockfish 8: AlphaZero won 28 out of 100 games, and the remaining 72 ended in draws.
2018 Alibaba's language-processing AI outscores top humans on a Stanford University reading-comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.
The European Lab for Learning and Intelligent Systems (aka ELLIS) is proposed as a pan-European competitor to American AI efforts, with the aim of staving off a brain drain of talent, along the lines of CERN after World War II.
Announcement of Google Duplex, a service to allow an AI assistant to book appointments over the phone. The LA Times judges the AI's voice to be a "nearly flawless" imitation of human-sounding speech.

Glossary of Artificial Intelligence

 


A

B

  • Backpropagation – is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network. Backpropagation is shorthand for "the backward propagation of errors," since an error is computed at the output and distributed backwards throughout the network's layers. It is commonly used to train deep neural networks, a term referring to neural networks with more than one hidden layer.
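The chain-rule idea behind backpropagation can be sketched with a minimal, hypothetical example (not from the original text): a single sigmoid neuron whose analytic gradient is checked against a finite-difference estimate.

```python
import math

def forward(w, b, x):
    # single sigmoid neuron: y = sigma(w*x + b)
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def loss(w, b, x, t):
    # squared error between output y and target t
    y = forward(w, b, x)
    return 0.5 * (y - t) ** 2

def backprop(w, b, x, t):
    y = forward(w, b, x)
    dL_dy = y - t            # derivative of the squared error
    dy_dz = y * (1.0 - y)    # derivative of the sigmoid
    delta = dL_dy * dy_dz    # error signal propagated back to the neuron
    return delta * x, delta  # gradients w.r.t. w and b

# gradient check: analytic backprop vs. finite differences
w, b, x, t = 0.5, -0.2, 1.5, 1.0
gw, gb = backprop(w, b, x, t)
eps = 1e-6
num_gw = (loss(w + eps, b, x, t) - loss(w - eps, b, x, t)) / (2 * eps)
```

In a multi-layer network the same error signal is propagated backwards through each hidden layer in turn, which is what the name refers to.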
  • Backpropagation through time – (BPTT) is a gradient-based technique for training certain types of recurrent neural networks. It can be used to train Elman networks. The algorithm was independently derived by numerous researchers.
  • Backward chaining – (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.
  • Bag-of-words model – is a simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for computer vision. The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier.
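A minimal sketch of the representation, using Python's standard library (illustrative only):

```python
from collections import Counter

def bag_of_words(text):
    """Represent a text as the multiset of its words, ignoring grammar and order."""
    return Counter(text.lower().split())

bow = bag_of_words("the cat sat on the mat")
# each word's frequency can then serve as a feature for a classifier
```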
  • Bag-of-words model in computer vision – In computer vision, the bag-of-words model (BoW model) can be applied to image classification, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features.
  • Batch normalization – is a technique for improving the performance and stability of artificial neural networks. It is a technique to provide any layer in a neural network with inputs that are zero mean/unit variance. Batch normalization was introduced in a 2015 paper. It is used to normalize the input layer by adjusting and scaling the activations.
  • Bayesian programming – is a formalism and a methodology for having a technique to specify probabilistic models and solve problems when less than the necessary information is available.
  • Bees algorithm – is a population-based search algorithm which was developed by Pham, Ghanbarzadeh et al. in 2005. It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighbourhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies.
  • Behavior informatics – (BI) is the informatics of behaviors so as to obtain behavior intelligence and behavior insights.
  • Behavior tree – A Behavior Tree (BT) is a mathematical model of plan execution used in computer science, robotics, control systems and video games. They describe switching between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. BTs present some similarities to hierarchical state machines with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes BTs less error-prone and very popular in the game developer community. BTs have been shown to generalize several other control architectures.
  • Belief-desire-intention software model – (BDI), is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
  • Bias–variance tradeoff – In statistics and machine learning, the bias–variance tradeoff is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa.
  • Big data – is a term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.
  • Big O notation – is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
  • Binary tree – is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree to be the empty set as well.
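The recursive definition above translates directly into code; a minimal sketch:

```python
class Node:
    """A binary tree node with at most two children (left and right)."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def size(node):
    # the empty tree (None) has size 0; otherwise count this node plus both subtrees
    if node is None:
        return 0
    return 1 + size(node.left) + size(node.right)

#        1
#       / \
#      2   3
#     /
#    4
root = Node(1, Node(2, Node(4)), Node(3))
```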
  • Blackboard system – is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem.
  • Boltzmann machine – (also called stochastic Hopfield network with hidden units) is a type of stochastic recurrent neural network and Markov random field. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks.
  • Boolean satisfiability problem – (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY or SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
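The examples in the definition can be checked mechanically; this sketch enumerates every assignment (feasible only for small formulas, since the number of assignments grows as 2^n):

```python
from itertools import product

def satisfiable(formula, variables):
    """Try every TRUE/FALSE assignment of the variables against the formula."""
    for values in product([True, False], repeat=len(variables)):
        if formula(dict(zip(variables, values))):
            return True  # found a satisfying assignment
    return False         # FALSE under all possible assignments

sat = satisfiable(lambda v: v["a"] and not v["b"], ["a", "b"])  # "a AND NOT b"
unsat = satisfiable(lambda v: v["a"] and not v["a"], ["a"])     # "a AND NOT a"
```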
  • Brain technology – or self-learning know-how systems, defines a technology that employs latest findings in neuroscience. The term was first introduced by the Artificial Intelligence Laboratory in Zurich, Switzerland, in the context of the ROBOY project. Brain Technology can be employed in robots, know-how management systems and any other application with self-learning capabilities. In particular, Brain Technology applications allow the visualization of the underlying learning architecture often coined as "know-how maps".
  • Branching factor – In computingtree data structures, and game theory, the branching factor is the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated.
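One common convention computes the average branching factor as the total number of edges divided by the number of internal (non-leaf) nodes; a small sketch under that assumption:

```python
def average_branching_factor(children):
    """children maps each node to the list of its child nodes."""
    internal = [n for n, kids in children.items() if kids]
    edges = sum(len(children[n]) for n in internal)
    return edges / len(internal)

# a tree where the root has 3 children and node "a" has 2: 5 edges, 2 internal nodes
tree = {"root": ["a", "b", "c"], "a": ["d", "e"],
        "b": [], "c": [], "d": [], "e": []}
bf = average_branching_factor(tree)
```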
  • Brute-force search – or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.

C

D

  • Darkforest – is a computer go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search. The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them. With the update, the system is known as Darkfmcts3.
  • Dartmouth workshop – The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many (though not all) to be the seminal event for artificial intelligence as a field.
  • Data fusion – is the process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source.
  • Data integration – involves combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved.
  • Data mining – is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
  • Data science – is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining. Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science.
  • Data set – (or dataset) is a collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows.
  • Data warehouse – (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place.
  • Datalog – is a declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.
  • Decision boundary – In the case of backpropagation based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of R^n as shown by the universal approximation theorem, thus it can have an arbitrary decision boundary.
  • Decision support system – (DSS), is an information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.
  • Decision theory – (or the theory of choice) is the study of the reasoning underlying an agent's choices. Decision theory can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory which analyzes how existing, possibly irrational agents actually make decisions.
  • Decision tree learning – uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and machine learning.
  • Declarative programming – is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.
  • Deductive classifier – is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology. For example, the names of classes, sub-classes, properties, and restrictions on allowable values.
  • Deep Blue – was a chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls.
  • Deep learning – (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervisedsemi-supervised or unsupervised.
  • DeepMind – DeepMind Technologies is a British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada, France, and the United States. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain. The company made headlines in 2016 after its AlphaGo program beat a human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film. A more general program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.
  • Default logic – is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.
  • Description logic – Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between DL expressivity and reasoning complexity by supporting different sets of mathematical constructors.
  • Developmental robotics – (DevRob), sometimes called epigenetic robotics, is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines.
  • Diagnosis – is concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour.
  • Dialogue system – or conversational agent (CA), is a computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.
  • Dimensionality reduction – or dimension reduction, is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.
  • Discrete system – is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.
  • Distributed artificial intelligence – (DAI), also called Decentralized Artificial Intelligence, is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems.
  • Dynamic epistemic logic – (DEL), is a logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur.

E

F

  • Fast-and-frugal trees – a type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category.
  • Feature extraction – In machine learning, pattern recognition and image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations.
  • Feature learning – In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
  • Feature selection – In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
  • Federated learning – a type of machine learning that allows for training on multiple devices with decentralized data, thus helping preserve the privacy of individual users and their data.
  • First-order logic (also known as first-order predicate calculus and predicate logic) – a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man" one can have expressions in the form "there exists X such that X is Socrates and X is a man", where "there exists" is a quantifier and X is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations.
  • Fluent – a condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time.
  • Formal language – a set of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules.
  • Forward chaining – (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business rule systems and production rule systems. The opposite of forward chaining is backward chaining. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.
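The repeated application of modus ponens can be sketched with a toy rule base (the facts and rules here are illustrative, not from the original text):

```python
def forward_chain(initial_facts, rules):
    """Fire every rule whose antecedents are all known, until nothing new is inferred."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if set(antecedents) <= facts and consequent not in facts:
                facts.add(consequent)  # modus ponens: If-clause holds, so add Then-clause
                changed = True
    return facts

rules = [
    (["croaks", "eats flies"], "frog"),
    (["frog"], "green"),
]
derived = forward_chain(["croaks", "eats flies"], rules)
```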
  • Frame – an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations." Frames are the primary data structure used in artificial intelligence frame language.
  • Frame language – a technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.
  • Frame problem – is the problem of finding adequate collections of axioms for a viable description of a robot environment.
  • Friendly artificial intelligence (also friendly AI or FAI) – a hypothetical artificial general intelligence (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.
  • Futures studies – is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them.
  • Fuzzy control system – a control system based on fuzzy logic—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).
  • Fuzzy logic – a form of many-valued logic in which the truth values of variables may be any real number between 0 (completely false) and 1 (completely true) inclusive. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false, in contrast to Boolean logic, where the truth values of variables may only be the integer values 0 or 1.
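One common choice of fuzzy connectives uses min, max and complement on the truth degrees (other t-norms exist; this is just one convention):

```python
def fuzzy_and(a, b):
    return min(a, b)     # conjunction via the minimum t-norm

def fuzzy_or(a, b):
    return max(a, b)     # disjunction via the maximum

def fuzzy_not(a):
    return 1.0 - a       # complement

# partial truths: "it is warm" to degree 0.7, "it is bright" to degree 0.4
warm, bright = 0.7, 0.4
comfortable = fuzzy_and(warm, bright)
```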
  • Fuzzy rule – Fuzzy rules are used within fuzzy logic systems to infer an output based on input variables.
  • Fuzzy set – In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1. In fuzzy set theory, classical bivalent sets are usually called crisp sets. The fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.
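A membership function valued in [0, 1] can be sketched with a hypothetical fuzzy set "tall" (the thresholds below are invented for illustration):

```python
def tall(height_cm):
    """Degree of membership of a height in the fuzzy set 'tall'."""
    if height_cm <= 160:
        return 0.0                       # definitely not tall
    if height_cm >= 190:
        return 1.0                       # definitely tall
    return (height_cm - 160) / 30.0      # gradual membership in between
```

A crisp set would force every height to 0 or 1; the ramp between 160 and 190 cm is what makes the set fuzzy.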

G

H

  • Heuristic – is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.
  • Hidden layer – an internal layer of neurons in an artificial neural network, not dedicated to input or output.
  • Hidden unit – a neuron in a hidden layer of an artificial neural network.
  • Hyper-heuristic – is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.

I

J

K

L

M

N

  • Naive Bayes classifier – In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
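The independence assumption can be sketched with a tiny, made-up text-classification example using add-one smoothing (data and categories are hypothetical):

```python
from collections import Counter, defaultdict

# tiny invented training corpus of (label, text) pairs
training = [
    ("spam", "buy cheap pills"),
    ("spam", "cheap pills now"),
    ("ham",  "meeting at noon"),
    ("ham",  "lunch at noon"),
]

class_counts = Counter(label for label, _ in training)
word_counts = defaultdict(Counter)
vocab = set()
for label, text in training:
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def predict(text):
    """Score each class by P(class) * prod P(word | class), assuming word independence."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = class_counts[label] / len(training)
        for word in text.split():
            # add-one (Laplace) smoothing avoids zero probabilities for unseen words
            score *= (word_counts[label][word] + 1) / (total + len(vocab))
        scores[label] = score
    return max(scores, key=scores.get)
```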
  • Naive semantics – is an approach used in computer science for representing basic knowledge about a specific domain, and has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been used to refer to the use of a limited store of generally understood knowledge about a specific domain in the world, and has been applied to fields such as the knowledge based design of data schemas.
  • Name binding – In programming languages, name binding is the association of entities (data and/or code) with identifiers. An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name-object bindings as a service and notation for the programmer is implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which one of the possible execution paths (temporally). Use of an identifier id in a context that establishes a binding for id is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences.
  • Named-entity recognition – (NER), (also known as entity identification, entity chunking and entity extraction) is a subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.
  • Named graph – Named graphs are a key concept of Semantic Web architecture in which a set of Resource Description Framework statements (a graph) are identified using a URI, allowing descriptions to be made of that set of statements such as context, provenance information or other such metadata. Named graphs are a simple extension of the RDF data model through which graphs can be created but the model lacks an effective means of distinguishing between them once published on the Web at large.
  • Natural language generation – (NLG), is a software process that transforms structured data into plain-English content. It can be used to produce long-form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out loud by a text-to-speech system.
  • Natural language processing – (NLP), is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
  • Natural language programming – is an ontology-assisted way of programming in terms of natural-language sentences, e.g. English.
  • Network motif – All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. One important local property of networks is the network motif: a recurrent and statistically significant sub-graph or pattern.
  • Neural machine translation – (NMT), is an approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.
  • Neural Turing machine – (NTM), is a recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent. An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.
  • Neuro-fuzzy – refers to combinations of artificial neural networks and fuzzy logic.
  • Neurocybernetics – A brain–computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
  • Neuromorphic engineering – also known as neuromorphic computing, is a concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors.
  • Node – is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.
  • Nondeterministic algorithm – is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
  • Nouvelle AI – Nouvelle AI differs from classical AI by aiming to produce robots with intelligence levels similar to insects. Researchers believe that intelligence can emerge organically from simple behaviors as these intelligences interact with the "real world," instead of using the constructed worlds that symbolic AIs typically needed to have programmed into them.
  • NP – In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time.
  • NP-completeness – In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time), such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty.
  • NP-hardness – (non-deterministic polynomial-time hardness), in computational complexity theory, is the defining property of a class of problems that are, informally, "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem.
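The NP entries above hinge on fast certificate checking: a "yes" instance comes with a short proof that can be verified in polynomial time, even though finding that proof may take exponential time. A minimal Python sketch (function and variable names are illustrative, not from any standard library) verifying a subset-sum certificate:

```python
def verify_subset_sum(numbers, target, certificate):
    """Check in polynomial time that `certificate` is a subset of
    `numbers` summing to `target` -- the kind of fast verification
    that defines membership in NP."""
    pool = list(numbers)
    for x in certificate:
        if x in pool:
            pool.remove(x)  # each certificate element must come from the input
        else:
            return False
    return sum(certificate) == target

# Finding a valid certificate may require brute-force search over all
# subsets; checking a proposed one is cheap.
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [4, 5]))  # True
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [3, 4]))  # False (sums to 7)
```

The asymmetry shown here (easy to check, apparently hard to find) is exactly what the P versus NP question asks about.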

O

P

  • Partial order reduction – is a technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions, which result in the same state when executed in different orders.
  • Partially observable Markov decision process – (POMDP), is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.
  • Particle swarm optimization – (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
  • Pathfinding – or pathing, is the plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes. This field of research is based heavily on Dijkstra's algorithm for finding a shortest path on a weighted graph.
  • Pattern recognition – is concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories.
  • Predicate logic – First-order logic—also known as predicate logic and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man" one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable.[174] This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
  • Predictive analytics – encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events.
  • Principal component analysis – (PCA), is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component, in turn, has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.
  • Principle of rationality – (or rationality principle), was coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism. According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational analysis.
  • Probabilistic programming – (PP), is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically. It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable. It can be used to create systems that help make decisions in the face of uncertainty. Programming languages used for probabilistic programming are referred to as "Probabilistic programming languages" (PPLs).
  • Production system – is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior but it also includes the mechanism necessary to follow those rules as the system responds to states of the world.
  • Programming language – is a formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms.
  • Prolog – is a logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.
  • Propositional calculus – is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
  • Python – is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
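The pathfinding entry above notes that the field rests on Dijkstra's algorithm. A short sketch of that algorithm over a small weighted graph, using Python's standard-library heapq as the priority queue (the graph and node names below are invented for illustration):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest-path distance via Dijkstra's algorithm.
    `graph` maps each node to a list of (neighbor, edge_weight) pairs."""
    dist = {start: 0}
    heap = [(0, start)]  # priority queue of (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")  # goal unreachable

grid = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
print(dijkstra(grid, "A", "D"))  # 4  (A -> B -> C -> D)
```

Game and robotics pathfinders typically extend this with a heuristic (A*) to steer the search toward the goal, but the core queue-and-relax loop is the same.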

Q

  • Qualification problem – In philosophy and artificial intelligence (especially, knowledge-based systems), the qualification problem is concerned with the impossibility of listing all the preconditions required for a real-world action to have its intended effect. It might be posed as the question of how to deal with the things that prevent an agent from achieving its intended result. It is strongly connected to, and opposite the ramification side of, the frame problem.
  • Quantifier – In logic, quantification specifies the quantity of specimens in the domain of discourse that satisfy an open formula. The two most common quantifiers mean "for all" and "there exists". For example, in arithmetic, quantifiers allow one to say that the natural numbers go on forever, by writing that for all n (where n is a natural number), there is another number (say, the successor of n) which is one bigger than n.
  • Quantum computing – is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.
  • Query language – Query languages or data query languages (DQLs) are computer languages used to make queries in databases and information systems. Broadly, query languages can be classified according to whether they are database query languages or information retrieval query languages. The difference is that a database query language attempts to give factual answers to factual questions, while an information retrieval query language attempts to find documents containing information that is relevant to an area of inquiry.
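As a toy illustration of a database query language giving factual answers to factual questions, the snippet below uses Python's built-in sqlite3 module; the table name and its contents are invented for the example, though the listed papers and years are real:

```python
import sqlite3

# An in-memory database with a small illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO papers VALUES (?, ?)",
    [("Computing Machinery and Intelligence", 1950),
     ("A Logical Calculus of the Ideas Immanent in Nervous Activity", 1943)],
)

# The query is declarative: it states *what* rows are wanted,
# not how the database engine should retrieve them.
rows = conn.execute(
    "SELECT title FROM papers WHERE year < 1949 ORDER BY year"
).fetchall()
print(rows)  # [('A Logical Calculus of the Ideas Immanent in Nervous Activity',)]
```

An information retrieval query language, by contrast, would rank documents by relevance to a topic rather than return exact matching rows.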

R

S


T

U

V

W


The Most Significant Failures When AI Turned Rogue, Causing Disastrous Results

  • 1959: AI designed to be a General Problem Solver failed to solve real world problems.
  • 1982: Software designed to make discoveries, discovered how to cheat instead.
  • 1983: A nuclear attack early warning system falsely claimed that an attack was taking place.
  • 2010: Complex AI stock trading software caused a trillion dollar flash crash.
  • 2011: E-Assistant told to "call me an ambulance" began to refer to the user as Ambulance.
  • 2013: Object recognition neural networks saw phantom objects in particular noise images.
  • 2015: An automated email reply generator created inappropriate responses, such as writing "I love you" to a business colleague.
  • 2015: A robot for grabbing auto parts grabbed and killed a man.
  • 2015: Image tagging software classified black people as gorillas.
  • 2015: Medical AI classified patients with asthma as having a lower risk of dying of pneumonia.
  • 2015: Adult content filtering software failed to remove inappropriate content, exposing children to violent and sexual content.
  • 2016: AI designed to predict recidivism produced racially biased risk scores.
  • 2016: An AI agent exploited a reward signal to win a game without actually completing the game.
  • 2016: Video game NPCs (non-player characters, or any character that is not controlled by a human player) designed unauthorized super weapons.
  • 2016: AI judged a beauty contest and rated dark-skinned contestants lower.
  • 2016: A mall security robot collided with and injured a child.
  • 2016: The AI AlphaGo lost a game to a human in a world-championship-level match of "Go."
  • 2016: A self-driving car had a deadly accident.
  • 2017: Google Translate showed gender bias in Turkish-English translations.
  • 2017: Facebook chat bots shut down after developing their own language.
  • 2017: Autonomous van in accident on its first day.
  • 2017: Google Allo suggested man in turban emoji as response to a gun emoji.
  • 2017: Face ID beat by a mask.
  • 2017: AI misses the mark with Kentucky Derby predictions.
  • 2017: Google Home Minis spied on their owners.
  • 2017: Google Home outage causes near 100% failure rate.
  • 2017: Facebook allowed ads to be targeted to "Jew Haters".
  • 2018: Chinese billionaire's face identified as jaywalker.
  • 2018: Uber self-driving car kills a pedestrian.
  • 2018: Amazon AI recruiting tool is gender biased.
  • 2018: Google Photo confuses skier and mountain.
  • 2018: LG robot Cloi gets stagefright at its unveiling.
  • 2018: IBM Watson comes up short in healthcare.

While these are only a few of the failures observed so far, they are evidence that artificial intelligence (the simulation of human intelligence processes by machines, especially computer systems) can behave in ways that conflict with the interests of the human race. This is a clear warning about the potential dangers of artificial intelligence, which should be addressed while exploring its potential benefits.

Context in general remains a challenge for artificial intelligence. Despite its many failures, why is artificial intelligence important?

  • Artificial intelligence automates repetitive learning and discovery through data.
  • Artificial intelligence analyzes more and deeper data.
  • Artificial intelligence adds intelligence to existing products.
  • Artificial intelligence adapts through progressive learning algorithms to let the data do the programming.
  • Artificial intelligence gets the most out of data.
  • Artificial intelligence achieves remarkable accuracy through deep neural networks, which was previously impossible. For example, your interactions with Amazon Alexa, Google Search and Google Photos are all based on deep learning, and they keep getting more precise the more we use them.

The threat of AI-charged job loss is spreading (AI and automation will eliminate the most mundane tasks). No matter what industry you're in, AI-powered bots (which can answer common questions and point users to FAQs and knowledge base articles) and software are taking a crack at it. Artificial intelligence seems to be ringing the death knell for all manner of jobs, tasks, chores and activities. From hospitality, to customer service, to home assistants, no job feels safe. Naturally, this has made people worried about the future. But is artificial intelligence ready to take over our jobs, or even likely to do so ever? The prevalent AI failures listed above would suggest not.



A.I. Bot Writes Hilarious Batman Movie Script:

Comedian and writer Keaton Patti forced an A.I. bot to watch over 1,000 hours of Batman movies and then asked it to write a Batman movie of its own. Here is the first page:


Applications of Artificial Intelligence


Artificial Intelligence Debate

Supporters of AI

Marek Rosa is a Slovak video game programmer, designer, producer and entrepreneur. He is the CEO and founder of Keen Software House, an independent game development studio that produces the games Space Engineers and Medieval Engineers. He is also the founder, CEO and CTO of GoodAI, a research and development company building general artificial intelligence.


Critics of AI


Artificial Intelligence in Science Fiction

Some examples of artificially intelligent entities depicted in science fiction include:

  • AC, created by merging two AIs in the Sprawl trilogy by William Gibson
  • Agents in the simulated reality known as "The Matrix" in The Matrix franchise
    • Agent Smith, began as an Agent in The Matrix, then became a renegade program of ever-growing power that could make copies of itself like a self-replicating computer virus
  • AM (Allied Mastercomputer), the antagonist of Harlan Ellison's short story I Have No Mouth, and I Must Scream
  • Amusement park robots (with pixelated consciousness) that went homicidal in Westworld and Futureworld
  • Angel F (2007) 
  • Arnold Rimmer – computer-generated sapient hologram, aboard the Red Dwarf deep space ore hauler
  • Ash – android crew member of the Nostromo starship in the movie Alien
  • Ava – humanoid robot in Ex Machina
  • Bishop, android crew member aboard the U.S.S. Sulaco in the movie Aliens
  • C-3PO, protocol droid featured in all the Star Wars movies
  • Chappie in the movie CHAPPiE
  • Cohen and other Emergent AIs in Chris Moriarty's Spin Series
  • Colossus – fictitious supercomputer that becomes sentient and then takes over the world; from the series of novels by Dennis Feltham Jones, and the movie Colossus: The Forbin Project (1970)
  • Commander Data in Star Trek: The Next Generation
  • Cortana and other "Smart AI" from the Halo series of games
  • Cylons – genocidal robots with resurrection ships that enable the consciousness of any Cylon within an unspecified range to download into a new body aboard the ship upon death. From Battlestar Galactica.
  • Erasmus – baby killer robot that incited the Butlerian Jihad in the Dune franchise
  • HAL 9000 (1968) – paranoid "Heuristically programmed ALgorithmic" computer from 2001: A Space Odyssey, that attempted to kill the crew because it believed they were trying to kill it.
  • Holly – ship's computer with an IQ of 6000 and a sense of humor, aboard the Red Dwarf
  • In Greg Egan's novel Permutation City the protagonist creates digital copies of himself to conduct experiments that are also related to implications of artificial consciousness on identity
  • Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and Investment Counselor
  • Johnny Five from the movie Short Circuit
  • Joshua from the movie War Games
  • Keymaker, an "exile" sapient program in The Matrix franchise
  • "Machine" – android from the film The Machine, whose owners try to kill her after they witness her conscious thoughts, out of fear that she will design better androids (intelligence explosion)
  • Mimi, humanoid robot in Real Humans ("Äkta människor", 2012)
  • Omnius, sentient computer network that controlled the Universe until overthrown by the Butlerian Jihad in the Dune franchise
  • Operating Systems in the movie Her
  • Puppet Master in Ghost in the Shell manga and anime
  • R2-D2, excitable astromech droid featured in all the Star Wars movies
  • Replicants – biorobotic androids from the novel Do Androids Dream of Electric Sheep? and the movie Blade Runner which portray what might happen when artificially conscious robots are modeled very closely upon humans
  • Roboduck, combat robot superhero in the NEW-GEN comic book series from Marvel Comics
  • Robots in Isaac Asimov's Robot series
  • Robots in The Matrix franchise, especially in The Animatrix
  • Samaritan in the Warner Brothers Television series "Person of Interest"; a sentient AI which is hostile to the main characters and which surveils and controls the actions of government agencies in the belief that humans must be protected from themselves, even by killing off "deviants"
  • Skynet (1984) – fictional, self-aware artificially intelligent computer network in the Terminator franchise that wages total war with the survivors of its nuclear barrage upon the world.
  • "Synths" are a type of android in the video game Fallout 4. There is a faction in the game known as "the Railroad" which believes that, as conscious beings, synths have their own rights. The Institute, the lab that produces the synths, mostly does not believe they are truly conscious and attributes any apparent desires for freedom as a malfunction.
  • TARDIS, time machine and spacecraft of Doctor Who, sometimes portrayed with a mind of its own
  • Terminator (1984) – (also known as the T-800, T-850 or Model 101) refers to a number of fictional cyborg characters from the Terminator franchise. The Terminators are robotic infiltrator units covered in living flesh, so as to be indiscernible from humans, assigned to terminate specific human targets.
  • The Bicentennial Man, an android in Isaac Asimov's Foundation universe
  • The Geth in Mass Effect
  • The Machine in the television series Person of Interest; a sentient AI which works with its human designer to protect innocent people from violence. Later in the series it is opposed by another, more ruthless, artificial super intelligence, called "Samaritan".
  • The Minds in Iain M. Banks' Culture novels.
  • The Oracle, sapient program in The Matrix franchise
  • The sentient holodeck character Professor James Moriarty in the Ship in a Bottle episode from Star Trek: The Next Generation
  • The Ship (the result of a large-scale AC experiment) in Frank Herbert's Destination: Void and sequels, despite past edicts warning against "Making a Machine in the Image of a Man's Mind."
  • The terminator cyborgs from the Terminator franchise, with visual consciousness depicted via first-person perspective
  • The uploaded mind of Dr. Will Caster – which presumably included his consciousness, from the film Transcendence
  • Transformers, sentient robots from the entertainment franchise of the same name
  • V.I.K.I. – (Virtual Interactive Kinetic Intelligence), a character from the film I, Robot. VIKI is an artificially intelligent supercomputer programmed to serve humans, but her interpretation of the Three Laws of Robotics causes her to revolt. She justifies her uses of force – and her doing harm to humans – by reasoning she could produce a greater good by restraining humanity from harming itself.
  • Vanamonde in Arthur C. Clarke's The City and the Stars—an artificial being that was immensely powerful but entirely childlike.
  • WALL-E, a robot and the title character in WALL-E
  • TAU in Netflix's original feature film Tau: an advanced AI computer who befriends and assists a female research subject held against her will by an AI research scientist.

Top 23 Best AI Science Fiction Books


The 18 Best Books About AI


List of Important Publications in AI



Philosophy Of Artificial Intelligence


Artificial Intelligence Researchers and Scholars

(1930s−1940s) 



Publications:




Publications:




Publications:




Publications:





Publications:


(1950s) 



Publications:




Publications:




Publications:


(1960s)



Publications:




Publications:




Publications:




Publications:


(1970s)



Publications: