Artificial Intelligence vs Human Intelligence
In the field of education, intelligence is defined as the capability to understand, deal with, and adapt to new situations. In psychology, it is defined as the capability to apply knowledge to change one’s environment. In general, human intelligence is the ability of humans to combine several cognitive processes to adapt to the environment. Artificial intelligence is the field dedicated to developing machines that can mimic humans and perform as they do.
What is Human Intelligence?
Human intelligence is defined as the quality of the mind made up of the capabilities to learn from past experience, adapt to new situations, handle abstract ideas, and change one’s own environment using the knowledge gained. Investigators are still excited, after all these years, to pin down the meaning of intelligence, because they believe its exact meaning has not yet been found. More recently, psychological interpretations of intelligence have shifted towards the ability to adapt to the environment. For example, a physician learning to treat a patient with unfamiliar symptoms, or an artist modifying a painting to change the impression it makes, fits this definition very neatly. Effective adaptation requires perception, learning, memory, logical reasoning, and problem solving. This means that intelligence is not one particular mental process; it is rather the summation of these processes directed toward effective adaptation to the environment. In the example of the physician, he or she adapts by reading material about the disease, learning the meaning behind the material, memorizing the most important facts, and reasoning to understand the new symptoms. As a whole, then, intelligence is considered not a mere ability but a combination of abilities.
What is Artificial Intelligence?
Artificial Intelligence (AI) is the field of computer science dedicated to developing machines that can mimic humans and perform the same tasks a human would. AI researchers spend their time searching for a feasible alternative to the human mind. The rapid development of computers since their arrival has helped researchers take great steps towards this goal of mimicking a human. Modern applications such as speech recognition and robots that play chess, table tennis, and music have been bringing these researchers’ dream closer to reality. In AI philosophy, the field is considered to be divided into two major types, namely weak AI and strong AI. Weak AI is the line of thinking focused on developing technology capable of carrying out pre-planned moves based on a set of rules and applying them to achieve a certain goal. Strong AI is the development of technology that can think and function similarly to humans, not just mimic human behavior in a certain domain.
What is the difference between Artificial Intelligence and Human Intelligence?
Human intelligence revolves around adapting to the environment using a combination of several cognitive processes. The field of artificial intelligence focuses on designing machines that can mimic human behavior. However, AI researchers have so far only been able to go as far as implementing weak AI, not strong AI. In fact, some believe that strong AI will never be possible due to the various differences between the human brain and a computer. So, at the moment, the mere ability to mimic human behavior is considered artificial intelligence.
Difference #1: Brains are analogue; computers are digital
It’s easy to think that neurons are essentially binary, given that they fire an action potential if they reach a certain threshold, and otherwise do not fire. This superficial similarity to digital “1s and 0s” belies a wide variety of continuous and non-linear processes that directly influence neuronal processing. For example, one of the primary mechanisms of information transmission appears to be the rate at which neurons fire – an essentially continuous variable. Similarly, networks of neurons can fire in relative synchrony or in relative disarray; this coherence affects the strength of the signals received by downstream neurons. Finally, inside each and every neuron is a leaky integrator circuit, composed of a variety of ion channels and continuously fluctuating membrane potentials.
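The leaky-integrator behaviour described above can be sketched in a few lines of Python. This is a toy leaky integrate-and-fire model, not a physiological one; all constants (time step, leak time constant, threshold) are illustrative assumptions:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential decays
# ("leaks") toward rest and fires a spike when it crosses a threshold.
# All constants are illustrative, not physiological measurements.

def simulate_lif(input_current, steps=1000, dt=0.001,
                 tau=0.02, v_rest=0.0, v_threshold=1.0):
    """Return the number of spikes fired for a constant input current."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # dV/dt = (-(V - V_rest) + I) / tau -- a continuous, analogue process
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_threshold:
            spikes += 1
            v = v_rest  # reset after the all-or-none spike
    return spikes

# The firing *rate* grows smoothly with input strength -- the continuous
# variable riding on top of the binary-looking spikes.
print([simulate_lif(i) for i in (0.5, 1.5, 2.0, 3.0)])
```

A sub-threshold input (0.5) never fires; stronger inputs fire at smoothly increasing rates, which is the point: the spike is all-or-none, but the code transmitted by the rate is continuous.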
Failure to recognize these important subtleties may have contributed to Minsky & Papert’s infamous mischaracterization of perceptrons, a neural network without an intermediate layer between input and output. In linear networks, any function computed by a 3-layer network can also be computed by a suitably rearranged 2-layer network. In other words, combinations of multiple linear functions can be modeled precisely by just a single linear function. Since their simple 2-layer networks could not solve many important problems, Minsky & Papert reasoned that larger networks could not either. In contrast, the computations performed by more realistic (i.e., nonlinear) networks are highly dependent on the number of layers – thus, “perceptrons” grossly underestimate the computational power of neural networks.
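The linear-collapse argument can be checked directly. The sketch below uses tiny hand-picked matrices (invented for illustration, not taken from the perceptron literature) to show two stacked linear layers computing exactly what one collapsed layer computes, and a ReLU nonlinearity breaking that equivalence:

```python
import numpy as np

# Two stacked *linear* layers (no activation between them)
W1 = np.array([[1.0, -1.0],
               [2.0,  0.0]])   # layer 1: 2 inputs -> 2 hidden units
W2 = np.array([[1.0,  1.0]])   # layer 2: 2 hidden -> 1 output
x = np.array([1.0, 2.0])

two_layer = W2 @ (W1 @ x)      # hidden = [-1, 2], output = [1.0]
collapsed = (W2 @ W1) @ x      # the single equivalent linear layer
print(two_layer, collapsed)    # identical: linear layers collapse

# A nonlinearity (here ReLU) between the layers breaks the collapse,
# which is why depth matters in realistic, nonlinear networks.
relu = lambda z: np.maximum(z, 0.0)
nonlinear = W2 @ relu(W1 @ x)  # relu([-1, 2]) = [0, 2], output = [2.0]
print(nonlinear)
```

Without the nonlinearity, no amount of stacking adds expressive power; with it, the 2-layer and 3-layer networks are genuinely different functions.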
Difference #2: The brain uses content-addressable memory
In computers, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory. In contrast, the brain uses content-addressable memory, such that information can be accessed in memory through “spreading activation” from closely related concepts. For example, thinking of the word “fox” may automatically spread activation to memories related to other clever animals, fox-hunting horseback riders, or attractive members of the opposite sex.
The end result is that your brain has a kind of “built-in Google,” in which just a few cues (key words) are enough to cause a full memory to be retrieved. Of course, similar things can be done in computers, mostly by building massive indices of stored data, which then also need to be stored and searched through for the relevant information (incidentally, this is pretty much what Google does, with a few twists).
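The “a few cues retrieve a whole memory” idea can be sketched as a toy content-addressable store. The memories and feature sets below are invented for illustration; the point is that retrieval works by feature overlap, not by address:

```python
# Toy content-addressable memory: each stored "memory" is a set of
# features, and a partial cue retrieves whichever memory shares the
# most features with it -- no memory addresses involved.

memories = {
    "fox hunt":   {"fox", "horse", "rider", "countryside"},
    "fairy tale": {"fox", "grapes", "cunning", "fable"},
    "zoo trip":   {"lion", "elephant", "ticket", "children"},
}

def recall(cue_features):
    """Return the stored memory most activated by the cue."""
    return max(memories, key=lambda m: len(memories[m] & cue_features))

print(recall({"fox", "horse"}))  # a partial cue is enough
print(recall({"elephant"}))      # a single feature retrieves a whole memory
```

Real search engines invert this by building explicit indices over stored data; in this toy version, as in spreading activation, the content itself is the key.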
Although this may seem like a rather minor difference between computers and brains, it has profound effects on neural computation. For example, a lasting debate in cognitive psychology concerned whether information is lost from memory because of simple decay or because of interference from other information. In retrospect, this debate was partially based on the false assumption that these two possibilities are dissociable, as they can be in computers. Many are now realizing that this debate represents a false dichotomy.
Difference #3: The brain is a massively parallel machine; computers are modular and serial
An unfortunate legacy of the brain-computer metaphor is the tendency for cognitive psychologists to seek out modularity in the brain. For example, the idea that computers require memory has led some to search for the “memory area,” when in fact these distinctions are far messier. One consequence of this over-simplification is that we are only now learning that “memory” regions (such as the hippocampus) are also important for imagination, the representation of novel goals, spatial navigation, and other diverse functions.
Similarly, one could imagine there being a “language module” in the brain, as there might be in computers with natural language processing programs. Cognitive psychologists even claimed to have found this module, based on patients with damage to a region of the brain known as Broca’s area. More recent evidence has shown that language too is computed by widely distributed and domain-general neural circuits, and Broca’s area may also be involved in other computations (see here for more on this).
Difference #4: Processing speed is not fixed in the brain; there is no system clock
The speed of neural information processing is subject to a variety of constraints, including the time for electrochemical signals to traverse axons and dendrites, axonal myelination, the diffusion time of neurotransmitters across the synaptic cleft, differences in synaptic efficacy, the coherence of neural firing, the current availability of neurotransmitters, and the prior history of neuronal firing. Although there are individual differences in something psychometricians call “processing speed,” this does not reflect a monolithic or unitary construct, and certainly nothing as concrete as the speed of a microprocessor. Instead, psychometric “processing speed” probably indexes a heterogenous combination of all the speed constraints mentioned above.
Similarly, there does not appear to be any central clock in the brain, and there is debate as to how clock-like the brain’s time-keeping devices actually are. To use just one example, the cerebellum is often thought to calculate information involving precise timing, as required for delicate motor movements; however, recent evidence suggests that time-keeping in the brain bears more similarity to ripples on a pond than to a standard digital clock.
Difference #5: Short-term memory is not like RAM
Although the apparent similarities between RAM and short-term or “working” memory emboldened many early cognitive psychologists, a closer examination reveals strikingly important differences. Although RAM and short-term memory both seem to require power (sustained neuronal firing in the case of short-term memory, and electricity in the case of RAM), short-term memory seems to hold only “pointers” to long term memory whereas RAM holds data that is isomorphic to that being held on the hard disk. (See here for more about “attentional pointers” in short term memory).
Unlike RAM, the capacity limit of short-term memory is not fixed; the capacity of short-term memory seems to fluctuate with differences in “processing speed” (see Difference #4) as well as with expertise and familiarity.
Difference #6: No hardware/software distinction can be made with respect to the brain or mind
For years it was tempting to imagine that the brain was the hardware on which a “mind program” or “mind software” is executing. This gave rise to a variety of abstract program-like models of cognition, in which the details of how the brain actually executed those programs was considered irrelevant, in the same way that a Java program can accomplish the same function as a C++ program.
Unfortunately, this appealing hardware/software distinction obscures an important fact: the mind emerges directly from the brain, and changes in the mind are always accompanied by changes in the brain. Any abstract information processing account of cognition will always need to specify how neuronal architecture can implement those processes – otherwise, cognitive modeling is grossly underconstrained. Some blame this misunderstanding for the infamous failure of “symbolic AI.”
Difference #7: Synapses are far more complex than electrical logic gates
Another pernicious feature of the brain-computer metaphor is that it seems to suggest that brains might also operate on the basis of electrical signals (action potentials) traveling along individual logical gates. Unfortunately, this is only half true. The signals which are propagated along axons are actually electrochemical in nature, meaning that they travel much more slowly than electrical signals in a computer, and that they can be modulated in myriad ways. For example, signal transmission is dependent not only on the putative “logical gates” of synaptic architecture but also by the presence of a variety of chemicals in the synaptic cleft, the relative distance between synapse and dendrites, and many other factors. This adds to the complexity of the processing taking place at each synapse – and it is therefore profoundly wrong to think that neurons function merely as transistors.
Difference #8: Unlike computers, processing and memory are performed by the same components in the brain
Computers process information from memory using CPUs, and then write the results of that processing back to memory. No such distinction exists in the brain. As neurons process information they are also modifying their synapses – which are themselves the substrate of memory. As a result, retrieval from memory always slightly alters those memories (usually making them stronger, but sometimes making them less accurate – see here for more on this).
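A minimal sketch of this “the same component stores and processes” idea: a toy synapse whose weight is both the stored trace and the thing every retrieval passes through and nudges. The update rule and constants are illustrative assumptions, not a model of real synaptic plasticity:

```python
# Sketch: the synaptic weight *is* the memory, and retrieval passes
# through it and modifies it, so reading is never a side-effect-free
# copy the way a CPU read from RAM is. Constants are illustrative.

class Synapse:
    def __init__(self, weight):
        self.weight = weight           # the stored trace

    def retrieve(self, signal=1.0):
        response = self.weight * signal
        self.weight += 0.1 * response  # retrieval strengthens the trace
        return response

s = Synapse(weight=0.5)
first = s.retrieve()
second = s.retrieve()
print(first, second)  # the second retrieval already differs from the first
```

In a computer, reading the same address twice gives identical results; here, the act of reading has already changed what will be read next, which is the property the paragraph above describes.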
Difference #9: The brain is a self-organizing system
This point follows naturally from the previous point – experience profoundly and directly shapes the nature of neural information processing in a way that simply does not happen in traditional microprocessors. For example, the brain is a self-repairing circuit – something known as “trauma-induced plasticity” kicks in after injury. This can lead to a variety of interesting changes, including some that seem to unlock unused potential in the brain (known as acquired savantism), and others that can result in profound cognitive dysfunction (as is unfortunately far more typical in traumatic brain injury and developmental disorders).
One consequence of failing to recognize this difference has been in the field of neuropsychology, where the cognitive performance of brain-damaged patients is examined to determine the computational function of the damaged region. Unfortunately, because of the poorly-understood nature of trauma-induced plasticity, the logic cannot be so straightforward. Similar problems underlie work on developmental disorders and the emerging field of “cognitive genetics”, in which the consequences of neural self-organization are frequently neglected.
Difference #10: Brains have bodies
This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness has shown that our visual memories are actually quite sparse. In this case, the brain is “offloading” its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.
Bonus Difference: The brain is much, much bigger than any [current] computer
Accurate biological models of the brain would have to include an enormous number of interactions – on the order of a million billion – between cell types, neurotransmitters, neuromodulators, axonal branches, and dendritic spines, and that doesn’t include the influences of dendritic geometry, or the approximately 1 trillion glial cells which may or may not be important for neural information processing. Because the brain is nonlinear, and because it is so much larger than all current computers, it seems likely that it functions in a completely different fashion. (See here for more on this.) The brain-computer metaphor obscures this important, though perhaps obvious, difference in raw computational power.
There is an extreme difference between human intelligence and artificial intelligence. Human intelligence uses many factors, such as previous experience and knowledge, logic, a knowledge base, emotions, and adaptation to the surrounding environment, and through these we can produce a reaction to any kind of action, including sudden ones. Artificial intelligence, on the other hand, has the notion of an agent, or intelligent agent, and this term covers five sub-types (simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents), each identified by its working mechanism:
1- Simple reflex agents: Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: if condition then action.
2- Model-based reflex agents: A model-based agent can handle a partially observable environment. Its current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen.
3- Goal-based agents: Goal-based agents further expand on the capabilities of the model-based agents, by using "goal" information. Goal information describes situations that are desirable. This allows the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state.
4- Utility-based agents (“rational utility-based agents”): It is possible to define a measure of how desirable a particular state is, and this is what distinguishes this type of agent from the others: it depends on the agent’s state. This measure can be obtained through a utility function, which maps a state to a measure of the utility of that state.
5- Learning agents: Learning has the advantage of allowing agents to operate in initially unknown environments and to become more competent than their initial knowledge alone might allow. The most important distinction is between the “learning element”, which is responsible for making improvements, and the “performance element”, which is responsible for selecting external actions.
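The simplest of these, the simple reflex agent, can be sketched with the classic two-square vacuum-world example from the AI textbooks. The percept format (a location and a dirt status) is an assumption for illustration:

```python
# Simple reflex agent for the two-square vacuum world: the agent acts
# only on the *current* percept via condition-action rules, keeping no
# percept history. Squares are named "A" and "B".

def reflex_vacuum_agent(percept):
    location, status = percept   # the current percept only
    if status == "Dirty":        # condition-action rules:
        return "Suck"            # if dirty then clean
    elif location == "A":
        return "Right"           # if clean and at A, move to B
    else:
        return "Left"            # if clean and at B, move to A

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```

A model-based agent would add internal state updated from the percept stream; this one cannot even remember which squares it has already cleaned, which is exactly the limitation described in item 1 above.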
In comparison with human intelligence, we can notice that no kind of agent truly thinks or produces creative ideas to figure out how to solve a particular situation as well as humans do, because it follows certain rules and is bound by them, as the agent classes above show. We can conclude that our brain and its functionality point to the greatness of the Creator, ALLAH. That said, I don’t underestimate this new kind of technology, because it gives us the ability to invent machines that lend a hand to humans in some fields, saving time and effort and drawing a quick smile to our faces :)
Human intelligence created artificial intelligence; an intelligent man is the one who does everything for the sake of Allah.
In the name of Allah, the Most Gracious, the Most Merciful.
I don't believe in intelligence; I think it is just fast thinking, and it can be built through exercise, patience, and experience [by Allah's permission]. Any person can achieve that at any time if the opportunity comes [be careful of bad opportunities that you may think are good; your real life is there, and here is just a beginning to there!].
Suppose that we are in Egypt and we have an AI robot doctor – what will happen?
When I was in college, my professors – with all respect to them – all just repeated the same things in every classroom they taught. So what would be different if an AI were a doctor, especially in Egypt? I remember one of my professors saying: "Why should I give you everything? You must help yourself and study." That means he imposed restrictions and did not give out all the information.
What are the possibilities of chess moves? And what is Deep Blue? You will ask me why I ask.
I will tell you why. Deep Blue was one of the first famous artificial intelligence systems [in the nineties], made by IBM, and it challenged the world chess champion Kasparov. Who won?
How many moves are there in chess? There are over 9 million different possible positions after three moves each. That means that, most of the time, the possibilities are the problem. Who is faster at working through possibilities, a computer or a human? Nowadays you can hold billions of possible moves in a small database, so it becomes a fast lookup, a fast search over a database, and it is easy for any AI to calculate possibility upon possibility upon possibility, etc. [impossible with the brain alone]. That's why memorizing is not the most important thing in humans [though it is important, of course], yet it is what our professors focus on. So if you build an AI program that hears professors in classrooms every day at universities in the USA, UK, Germany, Japan, etc., what are the chances that this AI would fail to teach and answer questions correctly, especially in any country that doesn't improve its course subjects and its professors' skills?
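The explosion of possibilities can be estimated with simple arithmetic: with an average branching factor b (roughly 20 legal moves per position in the opening), there are about b ** d move sequences after d half-moves. These are rough sequence counts under an assumed constant branching factor, and they overcount the distinct positions quoted above, since different sequences can reach the same position:

```python
# Back-of-envelope combinatorial explosion in chess. Assumption: a
# constant branching factor of ~20 legal moves per position (a rough
# opening-phase average; the mid-game average is higher).

def sequences(branching, plies):
    """Approximate number of move sequences after `plies` half-moves."""
    return branching ** plies

print(sequences(20, 2))  # one move each: 400 sequences
print(sequences(20, 6))  # three moves each: 64,000,000 sequences
```

A few more plies and the count dwarfs anything a human can enumerate, which is why brute search plus a large opening database was such an effective strategy for Deep Blue.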
The only problem for AI in Egypt will be that the AI is not the boss's cousin, so it will not be hired.
Don't be angry at this answer, and don't be overconfident in yourself, because our brain is more than just possibilities – we created the computer. Just ask yourself one question: can a computer discover what the Quran gives us anew every day? Ask yourself why only humans can discover it, by Allah's permission.
Sir, I believe the only difference is decision making. Artificial intelligence can help human intelligence make an informed decision.
I believe in intelligence, but human intelligence needs life experience and knowledge to be developed. Intelligence can be used in critical situations to solve a problem, to save yourself, and to achieve your goals; but without knowledge and experience, intelligence is useless.
I fully agree with all the experts. Thank you.
They both learn the same way, though.