
What is new in deep neural networks in computer science?

Question added by Lina Samer , Digital Media Graphic Designer , iDirection
Date Posted: 2016/08/02
Ghada Eweda
by Ghada Eweda , Medical sales hospital representative , Pfizer pharmaceutical Plc.

Deep neural networks

Since the invention of the computer, there have been people talking about the things that computers will never be able to do.

Whether it was beating a grandmaster at chess or some other milestone, these predictions have always been wrong. However, some of that nay-saying had a better grounding in computer science. There were goals that, if you knew how computers worked, you knew would be virtually impossible to achieve: recognizing human emotions through facial expressions, reading a wide variety of cursive handwriting, correctly identifying the words in spoken language, driving autonomously through busy streets.

Well, computers are now starting to be able to do all of those things, and quite a bit more.

Were the nay-sayers really just too cynical about the true capabilities of digital computers? In a way, no.  To solve those monumental challenges, scientists were forced to come up with a whole new type of computer, one based on the structure of the brain. These artificial neural networks (ANNs) only ever exist as a simulation running on a regular digital computer, but what goes on inside that simulation is fundamentally very different from classical computing.

What's new is that an artificial neural network is at once an exercise in computer science, applied biology, pure mathematics, and experimental philosophy. It's all of those things, and much more.

OK, but what can ANNs actually do?

The usefulness of ANNs falls into one of two basic categories: as tools for solving problems that are inherently difficult for both people and digital computers, and as experimental and conceptual models of something — classically, brains. Let’s talk about each one separately.

First, the real reason for interest (and, more importantly, investment) in ANNs is that they can solve problems. Google uses an ANN to learn how to better target “watch next” suggestions after YouTube videos. The scientists at the Large Hadron Collider turned to ANNs to sift the results of their collisions and pull the signature of just one particle out of the larger storm. Shipping companies use them to minimize route lengths over a complex scattering of destinations. Credit card companies use them to identify fraudulent transactions. They’re even becoming accessible to smaller teams and individuals: Amazon, MetaMind, and more are offering tailored machine learning services to anyone for a surprisingly modest fee.

What an ANN thinks dumbbells look like, from training with photos.

Things are just getting started. Google’s been training its photo-analysis algorithms with more and more pictures of animals, and they’re getting pretty good at telling dogs from cats in regular photographs. Both translation and voice synthesis are progressing to the point that we could soon have a Babel-fish-like device offering natural, real-time conversations between people speaking different languages. And, of course, there are the big three ostentatious examples that really wear their machine learning on their sleeve: Siri, Google Now, and Cortana.

The other use of neural networks lies in carefully designing them to mirror the structure of brains. Both our understanding of that structure, and the computational power necessary to simulate it, are nowhere close to what we’d need to do robust brain science in a computer model. There have been some amazing efforts at simulating certain aspects of certain portions of the brain, but the work is still in its very preliminary stages.

One advantage of this approach is that while you can’t (or… shouldn’t) genetically engineer humans to have an experimental change built into their brains, you absolutely can perform such mad-scientist experiments on simulated brains. ANNs can explore a far wider array of possibilities than medicine could ever practically or ethically consider, and they could someday allow scientists to quickly check on more out-there, “I wonder” hypotheses with potentially unexpected results.

When you ask yourself, “Can an artificial neural network do it?” immediately after, ask yourself “Can I do it?” If the answer is yes, then your brain must be capable of doing something that an ANN might one day be able to simulate. On the other hand, there are plenty of things an ANN might one day be able to do that a brain never could.

I think the potential for ANNs is nearly limitless.

alaa liswe
by alaa liswe , Administrative Assistant , Arab Open University

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of ANNs as well.
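That learning-by-adjusting-connections idea can be sketched in a few lines of Python. This is a minimal, illustrative perceptron (a single artificial neurone), not production code: the "synaptic" weights are nudged whenever the network misclassifies a tagged example, here learning the logical AND function.

```python
# Minimal perceptron: learning = adjusting connection weights on errors.
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label 0 or 1."""
    n = len(examples[0][0])
    w = [0.0] * n          # connection strengths ("synapses")
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                              # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                               # adjust only on mistakes
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn logical AND from tagged examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

After training, the adjusted weights encode the AND relationship; no rule was ever programmed in explicitly, which is the point of learning by example.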

Home Front
by Home Front , IT Technician , multimedia Center Systems

A quick look at neural networks: a neural network is a computer system modeled on the human brain and nervous system.

More precisely, it is a computer architecture modeled on the human brain, consisting of nodes connected to each other by links of differing strengths (see ieeexplore.ieee.org).

Bassam Ali Mohammed Al-mamari
by Bassam Ali Mohammed Al-mamari , Monitoring and Evaluation Assistant , Prodigy Systems

This site can help:

http://deeplearning4j.org/neuralnet-overview.html

Thank you for the invitation.

eman abdelgawad mohamed
by eman abdelgawad mohamed , Customer Service , National Bank of Egypt

Are deep neural networks creative? It seems like a reasonable question. Google's "Inceptionism" technique transforms images, iteratively modifying them to enhance the activation of specific neurons in a deep net. The images appear trippy, transforming rocks into buildings or leaves into insects. Another neural generative model, introduced by Leon Gatys of the University of Tübingen in Germany, can extract the style from one image (say, a painting by Van Gogh) and apply it to the content of another image (say, a photograph).

Generative adversarial networks (GANs), introduced by Ian Goodfellow, are capable of synthesizing novel images by modeling the distribution of seen images. Additionally, character-level recurrent neural network (RNN) language models now permeate the internet, appearing to hallucinate passages of Shakespeare, Linux source code, and even Donald Trump's hateful Twitter ejaculations. Clearly, these advances emanate from interesting research and deserve the fascination they inspire.

Michael Balogun
by Michael Balogun , Technical Support Audio , Success Power international

A neural network is not as smart as the human brain, but it learns in a clever way: given a large number of handwritten digits, known as training examples, it improves its accuracy as the number of training examples increases. Note that this is not restricted to digits alone.
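That "more examples, better accuracy" effect can be shown with a toy sketch (every number here is invented for illustration): class 0 points cluster around 0.0 and class 1 points around 1.0, and a nearest-class-mean rule stands in for the classifier.

```python
import random

# Toy illustration: accuracy tends to improve as the number of
# training examples grows. We classify noisy 1-D points by which
# class mean they fall closest to.
random.seed(0)

def make_examples(n):
    """n labeled points per class, Gaussian noise around each center."""
    return [(random.gauss(label, 0.4), label)
            for label in (0, 1) for _ in range(n)]

def train_centroids(examples):
    """'Training' here is just estimating each class's mean."""
    return {label: sum(x for x, y in examples if y == label) /
                   sum(1 for _, y in examples if y == label)
            for label in (0, 1)}

def accuracy(means, test):
    hits = sum(1 for x, y in test
               if min(means, key=lambda c: abs(x - means[c])) == y)
    return hits / len(test)

test_set = make_examples(500)   # held-out "unseen" examples
results = {n: accuracy(train_centroids(make_examples(n)), test_set)
           for n in (2, 20, 200)}
for n, acc in results.items():
    print(n, round(acc, 2))     # accuracy generally rises with n
```

With only 2 examples per class the estimated means are noisy; with 200 they sit close to the true centers, so accuracy on the unseen test set approaches the best achievable for this amount of noise.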

Farhan Akhtar
by Farhan Akhtar , Lab Assistant Science , Fouji foundation model school jhelum punjab pakistan

Deep learning is a branch of machine learning based on a set of algorithms that attempt to ... Various deep learning architectures such as deep neural networks ... and recurrent neural networks have been applied to fields like computer vision ... tasks are constantly being improved with new applications of deep learning.

Nisith Mondal
by Nisith Mondal , IT SUPPORT EXECUTIVE , M/S K.K. Paul Enterprise

An artificial neural network (NN for short) is a classifier. In supervised machine learning, classification is one of the most prominent problems. The aim is to assign objects to classes (terminology not to be confused with object-oriented programming). Classification has a broad domain of applications, for example:

  • in image processing we may seek to distinguish images depicting different kinds (classes) of objects (e.g. cars, bikes, buildings etc) or different persons,
  • in natural language processing (NLP) we may seek to classify texts into categories (e.g. distinguish texts that talk about politics, sports, culture etc),
  • in financial transactions processing we may seek to decide if a new transaction is legitimate or fraudulent.

The term "supervised" refers to the fact that the algorithm is previously trained with "tagged" examples for each category (i.e. examples whose classes are made known to the NN) so that it learns to classify new, unseen examples in the future.

In simple terms, a classifier accepts a number of inputs, which are called features and collectively describe an item to be classified (be it a picture, a text, a transaction, or anything else as discussed previously), and outputs the class it believes the item belongs to. For example, in an image recognition task, the features may be the array of pixels and their colors. In an NLP problem, the features are the words in a text. In finance, the features may be several properties of each transaction, such as the time of day, the cardholder's name, the billing and shipping addresses, the amount, and so on.
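As a concrete (and entirely hypothetical) illustration, here is one way a transaction's properties might be flattened into the numeric feature vector a classifier consumes. The field names and scalings below are invented for the sketch:

```python
# Hypothetical example of feature extraction: turning a transaction's
# raw properties into a numeric vector (all names invented here).
transaction = {"hour": 23, "amount": 950.0,
               "billing_eq_shipping": False, "card_country": "US"}

def to_features(t):
    return [
        t["hour"] / 23.0,                         # scale hour into [0, 1]
        t["amount"] / 1000.0,                     # rough amount scaling
        1.0 if t["billing_eq_shipping"] else 0.0, # boolean -> 0/1
        1.0 if t["card_country"] == "US" else 0.0,# one-hot-style flag
    ]

print(to_features(transaction))  # [1.0, 0.95, 0.0, 1.0]
```

Choosing which properties to include, and how to encode them, is exactly the feature-selection work described below.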

It is important to understand that we assume there is an underlying real relationship between the characteristics of an item and the class it belongs to. The goal of running a NN is: given a number of examples, try to come up with a function that resembles this real relationship. (Of course, you'll say: you are geeks, you are better with functions than relationships!) This function is called the predictive model, or just the model, because it is a practical, simplified version of how items with certain features belong to certain classes in the real world. Get comfy with the word "function", as it comes up quite often; it is a useful abstraction for the rest of the conversation (no maths involved).

You might be interested to know that a big part of the work data scientists do (the people who work on such problems) is to figure out exactly which features best describe the entities of the problem at hand, which is similar to asking which characteristics seem to distinguish items of one class from those of another. This process is called feature selection.

A NN does exactly that. It is a structure used for classification, which consists of several components interconnected and organized in layers. These components are called artificial neurons (ANs). Each AN is itself a classifier, only a simpler one whose ability to classify is limited when used for complex problems. It turns out that we can completely overcome the limitations of simple classifiers by interconnecting a number of ANs to form powerful NNs. Think of it as an example of the principle Unite and Lead.

[figure: structure of an artificial neuron combining several inputs]

This structure, a combination of inputs that pass through the artificial neuron, resembles the functionality of a physical neuron in the brain, hence the name. In the following picture the structures of a physical and an artificial neuron are compared. The AN is shown as two nodes to illustrate the internals of a logistic-regression AN: it combines the inputs and then applies what is called the activation function (depicted below as a step function). It is usually drawn as a single node, as above.

  • The inputs of the AN correspond to the dendrites,
  • the AN itself (sum + activation) to the body/nucleus and
  • the output to the axon.
[figure: a biological neuron compared with an artificial neuron]
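A single AN of the kind compared above can be sketched directly: a weighted sum of the inputs (the "dendrites") followed by a step activation (the "body"), producing one output (the "axon"). The weights below are hand-picked for illustration, making the neuron behave like an OR gate:

```python
# One artificial neuron: weighted sum of inputs, then an activation
# function (a step function, as in the comparison figure above).
def step(z):
    return 1 if z > 0 else 0

def neuron(inputs, weights, bias, activation=step):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # "dendrites" -> "body"
    return activation(z)                                    # "body" -> "axon"

# With these hand-picked weights the neuron acts as an OR gate.
print(neuron([0, 0], [1.0, 1.0], -0.5))  # 0
print(neuron([1, 0], [1.0, 1.0], -0.5))  # 1
print(neuron([1, 1], [1.0, 1.0], -0.5))  # 1
```

A single neuron like this can only draw one straight boundary through its inputs, which is exactly the limitation that interconnecting many ANs into a network overcomes.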

The analogy goes deeper, as neurons are known to provide the human brain with a "generic learning algorithm": by re-wiring various types of sensory data to a brain region, the same region can learn to recognize different types of input. E.g. the brain region responsible for hearing can learn to see with the appropriate sensory re-wiring from the eyes to the hearing region.

Similarly ANs organized in NNs provide a generic algorithm in principle capable of learning to distinguish any classes. So, going back to the example applications in the beginning of this answer, you can use the same NN principles to classify pictures, texts or transactions. For a better understanding, read on.

At this point you must be wondering what on earth an activation function is. In order to understand this, we need to recall what a NN tries to compute: a function that takes an example described by its features as an input and outputs the likelihood that the example falls into each one of the classes. What the activation function does is take as an input the sum of these feature values and transform it into a form that can be used as a component of the output function. When multiple such components from all the ANs of the network are combined, the goal output function is constructed.

Historically the S-curve (aka the sigmoid function) has been used as the activation function in NNs (although better functions are now known). This choice relates to yet another biologically inspired analogy. Before explaining it, let’s see first how it looks (think of it as what happens when you can’t get the temperature in the shower right: first it’s too cold despite larger adjustment attempts and then it quickly turns hot with smaller adjustment attempts):

[figure: the sigmoid (S-curve) activation function]

Now the bio analogy: brain neurons activate more frequently as their electrical input stimuli increase. The relationship of the activation frequency to the input voltage is an S-curve. However, the S-curve is more pervasive in nature than just that; it is the curve of all kinds of phase transitions.
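The S-curve itself is one line of code; the sketch below just evaluates it at a few points to show the flat, then fast, then flat again shape described above:

```python
import math

# The sigmoid (S-curve) activation: near 0 for strongly negative
# inputs, a quick transition around zero, near 1 for strongly
# positive inputs.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for z in (-6, -2, 0, 2, 6):
    print(z, round(sigmoid(z), 3))
```

Note the symmetry around the midpoint: sigmoid(0) is exactly 0.5, and sigmoid(z) + sigmoid(-z) always equals 1, which is why it works nicely as a probability-like output.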

As mentioned, a NN is organized in layers of interconnected ANs (in the following picture layers are depicted with different colors).

[figure: a neural network with ANs organized in layers, one color per layer]
  • Its input layer consists of a number of ANs that depends on the number of input features. Features are engineered to describe the class instances, be they car images, texts, transactions, etc., depending on the application. E.g. in an image recognition task, the features may be the array of pixels and their colors.
  • Its output layer often consists of a number of ANs equal to the number of classes in the problem. When given a new, unseen example, each AN of the output layer assigns a probability that the example belongs to its particular class, based on the network's training.
  • Between the input and output layers, there may be several hidden layers of ANs (for reasons briefly described next), but for many problems one or two hidden layers are enough.
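A forward pass through such a layered structure can be sketched in plain Python. The layer sizes and weight values below are arbitrary placeholders; training is what would actually set them:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: every AN sees all outputs of the previous layer."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 input features -> hidden layer of 3 ANs -> output layer of 2 classes.
# These weights are illustrative placeholders, not a trained model.
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.0, -1.0, 0.5], [-0.5, 1.0, 1.0]]
out_b = [0.0, 0.0]

x = [0.8, 0.2]                       # one example, described by 2 features
hidden = layer(x, hidden_w, hidden_b)
scores = layer(hidden, out_w, out_b)
print([round(s, 3) for s in scores])  # one score per class
```

The output is one sigmoid score per class; with trained weights, the higher score marks the class the network believes the example belongs to.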

Training is often done with the Back Propagation (BackProp) algorithm. During BackProp, the NN is fed with examples of all classes. As mentioned, the training examples are said to be "tagged", meaning that the NN is given both the example (as described by its features) and the class it really belongs to. Given many such training examples, the NN constructs, during training, the model, i.e. a probabilistic mapping of certain features (input) to classes (output). The model is reflected in the weights of the AN connectors (see previous figure); BackProp's job is to compute these weights. Based on the constructed model, the NN will classify new untagged examples (i.e. instances that it has not seen during training); that is, it will predict the probability of a new example belonging to each class. Therefore there are fundamentally two distinct phases:

  • During training, the NN is fed with several tagged examples from which it constructs its model.
  • During testing, the NN classifies new, unknown instances into the known classes, based on the constructed model.
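The two phases can be demonstrated end to end with a deliberately simple stand-in for a NN: a single perceptron trained with the error-correction rule (not BackProp itself, which requires hidden layers), on synthetic two-class data invented for this sketch:

```python
import random

# The two phases in miniature: train on tagged examples, then
# classify unseen ones. A perceptron stands in for the NN here.
random.seed(1)

def make(n):
    """n points per class: class 0 near (0,0), class 1 near (1,1)."""
    return [([random.gauss(c, 0.3), random.gauss(c, 0.3)], c)
            for c in (0, 1) for _ in range(n)]

def train(examples, epochs=25, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:        # phase 1: weights adjusted on errors
            err = y - (1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train(make(50))               # training phase (tagged examples)
test = make(100)                     # testing phase (unseen examples)
acc = sum((1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) == y
          for x, y in test) / len(test)
print(round(acc, 2))                 # accuracy on unseen data
```

The weights fixed during the training phase are then frozen and reused, unchanged, to classify the unseen test examples.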


Raymond Castillo
by Raymond Castillo , System Administrator , Aswaq Trading Company

‘Deep Learning’ came to dominate the ML field in the 2000s after datasets blew up in size (pioneered by ImageNet: 14M+ images from 21K+ categories), processing power became cheap and abundant (from CPUs to GPUs, and now Google's Tensor Processing Unit, the TPU, for TensorFlow, Google's partially open-source machine learning framework), and progress seemed inevitable. Convolutional neural networks are the hottest tool in deep learning due to their inherent ability to perform well on image (grid) data and the fact that they show invariance to distortions in images, which is exactly what makes them powerful.
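The convolution at the heart of a CNN is easy to sketch: one small filter is slid across the whole grid, which is why a pattern is detected wherever it appears. The image and edge filter below are toy values chosen for illustration:

```python
# Sketch of 2-D convolution, the core CNN operation: the same small
# kernel is applied at every position of the image grid.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A vertical-edge filter responds wherever a dark-to-bright boundary
# occurs, regardless of its position in the image.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(image, edge))  # [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The strong responses line up with the edge in every row, illustrating the translation invariance the answer above describes; a real CNN learns its kernel values during training instead of using hand-set ones.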
