
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive abilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. [1] Artificial superintelligence (ASI), by contrast, refers to AGI that greatly exceeds human cognitive abilities. AGI is considered one of the definitions of strong AI.


Creating AGI is a primary goal of AI research and of companies such as OpenAI [2] and Meta. [3] A 2020 survey identified 72 active AGI research and development projects across 37 countries. [4]

The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it could be possible in years or decades; others maintain it may take a century or longer; a minority believe it may never be achieved; and another minority claims that it is already here. [5] [6] Notable AI researcher Geoffrey Hinton has expressed concern about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect. [7]

There is debate over the exact definition of AGI and over whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. [8] AGI is a common topic in science fiction and futures studies. [9] [10]

Contention exists over whether AGI represents an existential risk. [11] [12] [13] Many AI experts have stated that mitigating the risk of human extinction posed by AGI should be a global priority. [14] [15] Others consider the development of AGI too remote to present such a risk. [16] [17]

Terminology


AGI is also known as strong AI, [18] [19] full AI, [20] human-level AI, [5] human-level intelligent AI, or general intelligent action. [21]

Some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. [a] In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities. [22] [19] Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans. [a]

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is far more generally intelligent than humans, [23] while the notion of transformative AI relates to AI having a large impact on society, for example, comparable to the agricultural or industrial revolution. [24]

A framework for classifying AGI by levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI. [25]
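The DeepMind framework can be pictured as a simple ordered scale. The sketch below is illustrative only: the text above specifies thresholds only for competent (50%) and superhuman (100%), so the cut-offs assigned here to emerging, expert, and virtuoso are assumptions for the sake of the example, and `classify` is a hypothetical helper, not part of the framework itself.

```python
# Illustrative sketch of DeepMind's 2023 levels-of-AGI scale, graded by the
# share of skilled adults an AI outperforms on non-physical tasks.
# Only the 50% (competent) and 100% (superhuman) thresholds come from the
# source text; the other cut-offs are assumptions for illustration.
LEVELS = [
    ("emerging", 0),     # roughly comparable to an unskilled human (assumed)
    ("competent", 50),   # outperforms >= 50% of skilled adults (from the text)
    ("expert", 90),      # assumed threshold
    ("virtuoso", 99),    # assumed threshold
    ("superhuman", 100), # outperforms 100% of skilled adults (from the text)
]

def classify(percentile_outperformed: float) -> str:
    """Return the highest level whose threshold the given percentile meets."""
    result = "emerging"
    for name, threshold in LEVELS:
        if percentile_outperformed >= threshold:
            result = name
    return result

print(classify(55))   # a system beating 55% of skilled adults -> "competent"
print(classify(100))  # beating everyone -> "superhuman"
```

Under this reading, ChatGPT or LLaMA 2 would sit at the "emerging" level, since they do not yet reliably outperform half of skilled adults across a wide task range.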

Characteristics


Various popular definitions of intelligence have been proposed. One of the leading proposals is the Turing test. However, there are other well-known definitions, and some researchers disagree with the more popular approaches. [b]

Intelligence characteristics


Researchers generally hold that intelligence is required to do all of the following: [27]

- reason, use strategy, solve puzzles, and make judgments under uncertainty
- represent knowledge, including common-sense knowledge
- plan
- learn
- communicate in natural language
- if necessary, integrate these skills in completion of any given goal


Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) [28] and autonomy. [29]

Computer-based systems that exhibit many of these capabilities exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent). There is debate about whether modern AI systems possess them to an adequate degree.


Physical qualities


Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include: [30]

- the ability to sense (e.g. see, hear, etc.), and
- the ability to act (e.g. move and manipulate objects, change location to explore, etc.).


This includes the ability to detect and respond to hazards. [31]

Although the ability to sense (e.g. see, hear, etc.) and the ability to act (e.g. move and manipulate objects, change location to explore, etc.) can be desirable for some intelligent systems, [30] these physical capabilities are not strictly required for an entity to qualify as AGI, particularly under the thesis that large language models (LLMs) may already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses. This interpretation aligns with the understanding that AGI has never been prescribed a particular physical embodiment and thus does not require a capacity for locomotion or traditional "eyes and ears". [32]

Tests for human-level AGI


Several tests meant to confirm human-level AGI have been considered, including: [33] [34]

The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be expert about machines, must be taken in by the pretence. [37]

AI-complete problems


A problem is informally called "AI-complete" or "AI-hard" if it is believed that solving it would require implementing AGI, because the solution is beyond the capabilities of a purpose-specific algorithm. [47]

There are many problems that have been conjectured to require general intelligence to solve as well as humans do. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. [48] Even a specific task like translation requires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.


However, many of these tasks can now be performed by modern large language models. According to Stanford University's 2024 AI Index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning. [49]

History


Classical AI


Modern AI research began in the mid-1950s. [50] The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. [51] AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." [52]

Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001.
