Artificial Intelligence

The market for Artificial Intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, and beyond the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises.

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem-solving". As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence"; it has become a routine technology.
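To make the "rational agent" idea above concrete, here is a minimal sketch in Python. The environment, the available actions, and the scoring function are invented placeholders purely for illustration; a real agent would use far richer percepts and goals.

```python
# Minimal sketch of a rational agent: it perceives the state of its
# environment and chooses the action expected to best advance its goal.
# The environment, actions, and scoring here are illustrative placeholders.

def evaluate(state, action, goal):
    """Toy utility: how much closer does the action move us to the goal?"""
    return -abs((state + action) - goal)

def rational_agent(percept, goal, actions=(-1, 0, 1)):
    """Pick the action that maximizes expected progress toward the goal."""
    return max(actions, key=lambda a: evaluate(percept, a, goal))

# Simple agent loop: perceive, act, repeat until the goal is reached.
state, goal = 0, 3
while state != goal:
    action = rational_agent(state, goal)
    state += action
    print(f"action={action}, new state={state}")
```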

Capabilities currently classified as Artificial Intelligence (AI) include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), self-driving cars, and interpreting complex data. Some people also consider AI a danger to humanity if it progresses unabated. AI research is divided into subfields that focus on specific problems, specific approaches, the use of a particular tool, or particular applications.
             
The central problems (or goals) of Artificial Intelligence (AI) research include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, soft computing (e.g. machine learning), and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology.
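As one illustration of the "search" tools mentioned above, here is a minimal breadth-first search over a toy state space. The graph, states, and goal are invented examples, not something from the original text.

```python
# Minimal illustration of search as an AI tool: breadth-first search
# over a toy state graph. The graph and goal here are invented examples.
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a shortest path from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

toy_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(breadth_first_search(toy_graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```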

History Of Artificial Intelligence

[Image: Artificial intelligence history. Source: Wikipedia.org]

While thought-capable artificial beings appeared as storytelling devices in antiquity, the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). With his calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers. Since the nineteenth century, artificial beings have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. In the nineteenth century, George Boole refined those ideas into propositional logic and Gottlob Frege developed a notational system for mechanical reasoning (a "predicate calculus"). Around the 1940s, Alan Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.

This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurology, information theory, and cybernetics, it led researchers to consider the possibility of building an electronic brain. The first work that is now generally recognized as Artificial Intelligence (AI) was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert Simon, became the pioneers of AI research. They and their students wrote programs that were astonishing to most people: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense, and laboratories had been established around the world.

Artificial Intelligence (AI)'s founders were optimistic about the future: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do." Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." They failed to recognize the difficulty of some of the remaining tasks. Progress slowed, and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI.

The next few years would later be called an "AI winter", a period when funding for AI projects was hard to find. In the mid-1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth-generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.

In the late 1990s and early 21st century, Artificial Intelligence (AI) began to be used for logistics, data mining, medical diagnosis and other areas. The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields, and a commitment by researchers to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.
Advanced statistical techniques (loosely known as deep learning), access to large amounts of data, and faster computers enabled advances in machine learning and perception. By the mid-2010s, machine learning applications were being used throughout the world. In a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants in smartphones. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use Artificial Intelligence (AI) within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2011. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another, and Facebook's system that can describe images to blind people.

Goals Of Artificial Intelligence

The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.

Top 10 Hot Artificial Intelligence (AI) Technologies

A Narrative Science survey found last year that 38% of enterprises are already using Artificial Intelligence (AI), growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the Artificial Intelligence (AI) market will grow from $8 billion in 2016 to more than $47 billion in 2020. Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making. Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:
  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarising business intelligence insights. Sample vendors: Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, Yseop.
  2. Speech Recognition: Transcribing and transforming human speech into a format usable by computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors: NICE, Nuance Communications, OpenText, Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationship with Alexa), ranging from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as smart home managers. Sample vendors: Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors: Amazon, Fractal Analytics, Google, H2O.ai, Microsoft, SAS, Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPUs) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors: Alluvial, Cray, Google, IBM, Intel, Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems, used for initial setup/training and for ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors: Advanced Systems Concepts, Informatica, Maana, Pegasystems, UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers (a minimal sketch of such a multi-layer network follows this list). Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors: Deep Instinct, Ersatz Labs, Fluid AI, MathWorks, Peltarion, Saffron Technology, Sentient Technologies.
  8. Biometrics: Enabling more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors: 3VR, Affectiva, Agnitio, FaceFirst, Sensory, Synqera, Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human actions and support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors: Advanced Systems Concepts, Automation Anywhere, Blue Prism, UiPath, WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods (a simple bag-of-words sentiment sketch follows this list). Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors: Basis Technology, Coveo, Expert System, Indico, Knime, Lexalytics, Linguamatics, Mindbreeze, Sinequa, Stratified, Synapsify.
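As a rough illustration of the “multiple abstraction layers” mentioned in the Deep Learning Platforms entry above, here is a minimal feed-forward network sketch. The layer sizes, random weights, and toy input are arbitrary placeholders, not a real trained model or any vendor’s platform.

```python
# Minimal sketch of a feed-forward neural network with two hidden layers,
# illustrating the "multiple abstraction layers" of deep learning.
# Layer sizes, weights, and inputs are arbitrary placeholders, not a real model.
import math
import random

random.seed(0)

def make_layer(n_inputs, n_outputs):
    """Random weight matrix and bias vector for one fully connected layer."""
    weights = [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_outputs)]
    biases = [0.0] * n_outputs
    return weights, biases

def forward(layer, inputs):
    """Apply one layer: weighted sum plus bias, then a sigmoid nonlinearity."""
    weights, biases = layer
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# A 3-input network with two hidden layers (the "abstraction layers") and one output.
network = [make_layer(3, 4), make_layer(4, 4), make_layer(4, 1)]

activation = [0.5, -0.2, 0.9]          # toy input features
for layer in network:                   # each layer re-represents the previous one
    activation = forward(layer, activation)
print("network output:", activation)    # a single value between 0 and 1
```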
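And as a toy illustration of the Text Analytics and NLP entry, here is a bag-of-words sentiment score. The word lists and example sentences are invented for illustration; production systems use statistical or machine-learned models rather than fixed lexicons.

```python
# Minimal sketch of text analytics: a bag-of-words sentiment score.
# The word lists and example sentences are invented for illustration;
# production NLP systems use statistical or machine-learned models instead.
import re

POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "angry"}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Return a score: positive word count minus negative word count."""
    tokens = tokenize(text)
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

for review in ["I love this product, the support is excellent",
               "Terrible experience, the quality is bad"]:
    print(sentiment(review), "->", review)
```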