
Artificial intelligence (AI): what it is, an article from IBM

Код машин, 29 September 2021

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

What is artificial intelligence?

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the conversation around artificial intelligence began with Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB), published in 1950. In that paper, Turing, often referred to as the "father of computer science", asks the following question: "Can machines think?" From there, he offers a test, now known as the "Turing Test", in which a human interrogator tries to distinguish a computer's text response from a human's. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, since it draws on ideas from linguistics.

Stuart Russell and Peter Norvig then proceeded to publish Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking vs. acting:

Human approach:

  • Systems that think like humans
  • Systems that act like humans

Ideal approach:

  • Systems that think rationally
  • Systems that act rationally

Alan Turing’s definition would have fallen under the category of “systems that act like humans.”

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are composed of AI algorithms that seek to create expert systems which make predictions or classifications based on input data.
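As a minimal, hedged illustration of that last sentence, the sketch below fits a small model that learns to make classifications from input data. The library (scikit-learn) and the dataset are assumptions chosen for the example; the article itself names no tools.

```python
# Minimal sketch (assumed library: scikit-learn): a model that makes
# classifications based on input data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Input data: flower measurements (features) and their species (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a mapping from inputs to a class.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Make predictions on inputs the model has not seen before.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.2f}")
```

The same fit-then-predict pattern underlies far more elaborate systems; only the model and the data change.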

Today, a lot of hype still surrounds AI development, which is expected of any emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow "a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain." As Lex Fridman notes here (01:08:15) (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To learn more about where IBM stands within the conversation around AI ethics, read more here.

Types of artificial intelligence—weak AI vs. strong AI

Weak AI—also called Narrow AI or Artificial Narrow Intelligence (ANI)—is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. ‘Narrow’ might be a more accurate descriptor for this type of AI as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.

Deep learning vs. machine learning

Since deep learning and machine learning tend to be used interchangeably, it’s worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

Visual Representation of how AI, ML and DL relate to one another

Deep learning is composed of neural networks. "Deep" in deep learning refers to a neural network with more than three layers, inclusive of the input and the output layers, which can be considered a deep learning algorithm. This is generally represented using the following diagram:

Diagram of Deep Neural Network
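In code, such a network might look like the minimal sketch below. The framework (PyTorch) and every layer size are assumptions made for illustration; the point is simply that an input layer, several hidden layers, and an output layer are stacked, i.e. more than three layers in total.

```python
# Minimal sketch of a "deep" feed-forward network (framework and sizes are
# assumptions, not details from the article).
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 256),  # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(256, 128),  # second hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # third hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer, e.g. scores for 10 classes
)

# One forward pass on a batch of dummy inputs.
dummy_batch = torch.randn(32, 784)
logits = deep_net(dummy_batch)
print(logits.shape)  # torch.Size([32, 10])
```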

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman noted in the same MIT lecture referenced above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn: human experts determine the hierarchy of features needed to understand the differences between data inputs, usually requiring more structured data to learn from.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn’t necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.

Artificial intelligence applications

There are numerous, real-world applications of AI systems today. Below are some of the most common examples:

  • Speech recognition: It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, and it is a capability which uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search—e.g. Siri—or provide more accessibility around texting.
  • Customer service:  Online virtual agents are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) around topics, like shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps, such as Slack and Facebook Messenger, and tasks usually done by virtual assistants and voice assistants.
  • Computer vision: This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry (a minimal sketch of such a network follows this list).
  • Recommendation engines: Using past consumption behavior data, AI algorithms can help to discover data trends that can be used to develop more effective cross-selling strategies. This is used to make relevant add-on recommendations to customers during the checkout process for online retailers.
  • Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
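As referenced in the computer vision item above, the sketch below outlines a tiny convolutional neural network. The framework (PyTorch), the input resolution, and every layer choice are illustrative assumptions; production systems like those listed here are far larger and trained on huge labeled datasets.

```python
# Minimal convolutional neural network sketch (PyTorch and all sizes are
# assumptions for illustration, not details from the article).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers scan the image for local visual patterns.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A fully connected head turns the detected patterns into class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a dummy batch of 32x32 RGB images.
model = TinyConvNet()
scores = model(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])
```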

History of artificial intelligence: Key dates and names

The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:

  • 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing, famous for breaking the Nazi ENIGMA code during World War II, sets out to answer the question "Can machines think?" and introduces the Turing Test to determine whether a computer can demonstrate the same intelligence (or the results of the same intelligence) as a human. The value of the Turing Test has been debated ever since.
  • 1956: John McCarthy coins the term "artificial intelligence" at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist, the first-ever running artificial intelligence program.
  • 1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that 'learned' through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.
  • 1980s: Neural networks that use a backpropagation algorithm to train themselves become widely used in AI applications.
  • 1997: IBM's Deep Blue beats then world chess champion Garry Kasparov, in a chess match (and rematch).
  • 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
  • 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
  • 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Google had acquired DeepMind in 2014 for a reported USD 400 million.

Artificial intelligence and IBM Cloud

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

  • Collect: Simplifying data collection and accessibility.
  • Organize: Creating a business-ready analytics foundation.
  • Analyze: Building scalable and trustworthy AI-driven systems.
  • Infuse: Integrating and optimizing systems across an entire business framework.
  • Modernize: Bringing your AI applications and systems to the cloud.

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

Sign up for an IBMid and create your IBM Cloud account.

